Merging IT and Biology

Nov 27, 2000
Eugene Russo

Much of the promise of bioinformatics likely lies with the big money and novel approaches of the private sector. At a late October symposium titled "Biosilico 2000," several bioinformatics company executives came together at the Trump Plaza in New York City to discuss the state of the immensely expansive and increasingly heterogeneous field of bioinformatics. Not surprisingly, many touted their products and business plans; but they also discussed and compared philosophies for engaging in the daunting task of applying information technology to biology, chemistry, and medicine.

Sponsored by Scientific American magazine, the symposium, the first of a series to be held annually, addressed the crucial dilemma faced by these companies, many of them only a few years old: How will they and their customers keep from drowning in an ever-increasing sea of raw genomics data that, largely a result of the Human Genome Project, has inundated biology and chemistry?

Several of the sessions focused on how companies plan to integrate data on genomics, proteomics, and even cellomics (studying cell function and drug impact at the level of the cell) to make the drug discovery process faster and more efficient. Jürgen Drews, chairman of International Biomedicine Management Partners Inc. in Basel, Switzerland, noted recent studies suggesting that pharmaceutical companies have actually become less efficient despite the influx of new technologies and troves of gene and protein data. "In terms of providing novel compounds per year, industry has not been productive in the last ten years," Drews remarked. "If anything, productivity has decreased."

Paul Sweetnam, vice president of research technology at Bayer in West Haven, Conn., likened the situation to baseball: "Where else can you bat under .300 and make over $1 million?"

Drews attributed the lackluster performance to an "innovation deficit," a repertoire of methods that stagnated through the early and mid-1990s. According to his 1997 analysis, all drug therapy was based on the existence of about 500 molecular targets, most of them enzymes, hormones, ion channels, or nuclear receptors. Genomics will likely add thousands more targets to the mix, said Drews, but that has not happened yet.

Stephen Friend, CEO of Rosetta Inpharmatics in Kirkland, Wash., suggested that making drug discovery more efficient requires an entirely new philosophy. He contended that genomic and proteomic approaches have thus far had little impact on increasing the rate at which lead compounds get Food and Drug Administration approval. And he suggested that the bulk of the benefit thus far has come in the form of terabytes of information--new targets and sequence data--that "scientists working in drug discovery have little way to integrate" or fully interpret.

According to Michael Fannon, vice president of Human Genome Sciences in Rockville, Md., the recent focus on sequence data, which involves huge volumes of the same type of information, has drawn attention away from the real challenge facing bioinformatics. "The variety of information adds to the complexity of the bioinformatics problem substantially, probably even more so than just sheer quantities themselves," he commented.

Friend argued that the same process has driven the world of drug discovery for 500 years, namely the "empiric method." Researchers gather information by repeatedly making knockouts in scores of model animals and testing the results--a brute-force approach that can work but is incredibly expensive. Acknowledging that mapping how each protein interacts with every other protein is an enormously complex, decade-long task, Friend said the protein or transcript level is not the first priority. Rosetta has devised tools to make perturbations in cells, collect information on the thousands of changes that result, and interpret what it all means by comparison against huge databases.

During a panel discussion that invited audience input, one symposium attendee questioned whether such tools and methods would actually alleviate the drug discovery bottleneck, or whether the ongoing development of new technologies would instead slow down companies and investigators. According to D. Lansing Taylor, president and CEO of Cellomics in Pittsburgh, such seemed to be the case with some pharmaceutical companies he had dealt with years ago. They had assumed that a combination of genomics and ultra-high-throughput screening would narrow the "innovation deficit."

Said Taylor, "I think that was maybe more wishful thinking than reality." Yet he believes that the field of cellomics will, in fact, be the key. "I think that what's going to happen is with additional tools, whether proteomics, cellomics, or modeling the whole system, that the paradigm in drug discovery will change by necessity so that it will no longer be a highly segmented serial process," he said. "There'll be a lot of feedback."

Jeremy Levin, CEO of Physiome, emphasized that bioinformatics must come into play to help the industry find novel drugs for even basic ailments. He noted that in the last 24 years, there have been no new classes of antibiotics despite the exponential increase in pharmaceutical activity. Even with increasing problems of antibiotic resistance, pharmaceutical companies have not sought hard-to-find antibiotics; now there's a projected seven-year gap without novel, orally effective antibiotics for simple earaches. Said Levin, "The low-hanging fruit in drug discovery is gone."


Eugene Russo can be contacted at erusso@the-scientist.com.