Erica P. Johnson
Imagine being able to discover the latest blockbuster drug using nothing but a PC and some highly sophisticated software. It's not as far-fetched as it sounds. A growing number of labs – both industrial and academic – are going "in silico," simulating everything from cells to clinical trials. The result is a sea change in pharmaceutical research, with resources once earmarked for bench work now being shunted into CPU clock cycles.
This shift in focus is due largely to the inefficiency of traditional drug development. Time and time again, billions of research dollars are spent on promising drug candidates that fail in late-stage clinical trials. "This industry is built on success, of course, but it's shaped by failure," says John Savin, executive director of Oxford, UK-based Physiomics. Computer modeling firms believe adoption of their methods can help drug companies weed out failures early and focus resources on more promising candidates.
GUIDING DRUG DEVELOPMENT
Entelos is just one of many companies staking a claim to the in silico market. Pharsight of Mountain View, Calif., simulates different trial designs to help pharmaceutical researchers determine the most cost-effective clinical trial strategy. Pharsight senior scientific advisor Jaap Mandema explains that traditionally, clinical trials are built around a single hypothesis, and companies do not always adapt trial strategies to the information that comes out of each trial phase.
Pharsight's Trial Simulator™, though, incorporates preclinical data on the compound of interest, clinical data from competing drugs, and market data to build quantitative, probabilistic simulations of different trial designs. "You get information on safety, efficacy, biomarkers, and other aspects early on that have to support a variety of decisions that you need to make at the end of that trial," he says. "We show the relationship between the design of the particular trial and how well you can... make particular decisions on the basis of the information that comes out of the trial."
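Trial Simulator itself is proprietary, but the underlying idea, Monte Carlo simulation of competing trial designs, can be sketched in a few lines. The effect size, success criterion, and sample sizes below are hypothetical; the point is that simulating a design many times estimates how often it would support the right decision at the end of the trial:

```python
import random
import statistics

def simulate_trial(n_per_arm, true_effect, sd=1.0, rng=None):
    """Simulate one two-arm trial; return True if the drug arm beats
    placebo by a simple z-like criterion (illustrative, not a real test)."""
    rng = rng or random
    placebo = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
    drug = [rng.gauss(true_effect, sd) for _ in range(n_per_arm)]
    diff = statistics.mean(drug) - statistics.mean(placebo)
    se = (sd ** 2 / n_per_arm + sd ** 2 / n_per_arm) ** 0.5
    return diff / se > 1.96  # illustrative "success" threshold

def estimated_power(n_per_arm, true_effect, n_trials=2000, seed=42):
    """Fraction of simulated trials that reach the success criterion."""
    rng = random.Random(seed)
    wins = sum(simulate_trial(n_per_arm, true_effect, rng=rng)
               for _ in range(n_trials))
    return wins / n_trials

# A larger design detects the same modest effect far more reliably.
small = estimated_power(n_per_arm=25, true_effect=0.4)
large = estimated_power(n_per_arm=100, true_effect=0.4)
```

Running both designs through the same simulated world makes the trade-off quantitative: the extra cost of the larger trial buys a measurable jump in the probability of a correct go/no-go decision.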
Simulation can also aid the early experimental stages of drug development. The BioAnalytics Group of Hightstown, NJ, designs virtual cell-based assays to help researchers design experiments; rather than replacing bench work entirely, the company's Model-Based Assays™ serve as an adjunct to traditional experimentation.
Millions of simulated experiments can be run in the time it takes to do a single laboratory experiment, helping researchers determine which conditions should be physically tested in the laboratory, says cofounder G. Scott Lett. Additionally, Lett says, scientists can filter data through a simulated experiment to facilitate analysis in situations in which the volume of actual experimental data makes interpretation difficult. He adds that pharmaceutical companies have used model-based assays successfully to prioritize drug compounds and to determine the mechanism of action of drug candidates.
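The economics Lett describes follow from how cheap a simulated experiment is. As a toy illustration (the Hill-equation response and the thresholds are invented for this sketch, not The BioAnalytics Group's models), one can sweep hundreds of candidate assay conditions in silico and forward only the informative ones to the bench:

```python
def simulated_response(conc, ec50=1.0, hill=2.0, top=100.0):
    """Hypothetical dose-response model (Hill equation)."""
    return top * conc ** hill / (ec50 ** hill + conc ** hill)

# Sweep hundreds of in-silico conditions, log-spaced over six decades,
# in the time it would take to pipette a single plate.
concs = [10 ** (i / 100 - 3) for i in range(601)]
results = [(c, simulated_response(c)) for c in concs]

# Flag only conditions worth taking to the bench: the steep, informative
# part of the curve, where measurements discriminate between hypotheses.
to_test = [c for c, r in results if 20.0 <= r <= 80.0]
```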
MODELING AT THE CELLULAR LEVEL
Courtesy of Genomatica
Modeling glucose and ethanol metabolism in Saccharomyces cerevisiae using Genomatica's SimPheny™ platform. SimPheny enables the development of predictive models of biological systems that provide the context for the integration of 'omics' data.
Ithaca, NY-based Gene Network Sciences (GNS) uses "mountains" of microarray and proteomics data to create large-scale, molecular-level simulations of cancer cells. GNS takes the data a company has on a drug and incorporates it into the simulation to help clients determine if the drug may have unwanted side effects or if, and how, it can be further optimized to work against a particular target.
Gene targets of unknown functionality can also be incorporated into the simulation, and models can predict in vivo responses to drugs based on data provided by clients. "The key is that we're able to create accurate simulations of the biological systems that they're testing and doing their analysis on, by utilizing the data that they're already generating," says cofounder Iya Khalil.
Physiomics takes a somewhat different approach. The company builds simulations of complex biological networks using software modules called "SystemCells," each of which represents a component of the pathway such as an enzyme, gene, promoter, organelle, or cell. Combining SystemCells allows the user to create a virtual pathway that can be interrogated with drug candidates, for example.
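Physiomics' actual SystemCell software is proprietary; as a rough sketch of the composable-module idea, here is a toy in which each module contributes rate terms for the species it touches, and modules are wired together into a pathway and integrated over time (all names, species, and rate constants are invented for illustration):

```python
# Each "module" stands in for one pathway component: given the current
# species levels, it returns its contribution to each species' rate of change.
def enzyme(substrate, product, k):
    def rate(state):
        flux = k * state[substrate]
        return {substrate: -flux, product: +flux}
    return rate

def source(species, k):
    def rate(state):
        return {species: k}  # constant synthesis
    return rate

def simulate(modules, state, dt=0.01, steps=1000):
    """Sum every module's rate terms, then take a forward-Euler step."""
    state = dict(state)
    for _ in range(steps):
        deriv = {s: 0.0 for s in state}
        for module in modules:
            for s, d in module(state).items():
                deriv[s] += d
        for s in state:
            state[s] += dt * deriv[s]
    return state

# Compose a toy two-step pathway:  -> A -> B
pathway = [source("A", 1.0), enzyme("A", "B", 0.5)]
final = simulate(pathway, {"A": 0.0, "B": 0.0})
```

The appeal of the modular design is that swapping one module, say, an enzyme inhibited by a drug candidate, changes the simulated pathway without rewriting the rest of the model.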
In the past year, says Physiomics' John Savin, the company built a representation of how a normal mammalian cell grows and divides and simulated the development of cancer in this cell. "We can show how we can treat that cancer cell with various combinations of drugs," he says.
San Diego-based Genomatica's SimPheny platform has been used to develop cellular-level models of metabolism for bioprocessing and drug development applications. Christophe Schilling, CEO, compares Genomatica's approach to that of modeling a transportation network. "The first step that we always have in a model is to try to figure out what are the streets and the roads that exist, which amounts to defining what are all the reactions that can take place inside the cell," he explains. The company draws from experimental data from functional genomics and metabolic biochemistry for a number of organisms and pathogens.
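Schilling's "streets and roads" metaphor maps onto the stoichiometric matrix at the heart of constraint-based modeling: each column is a reaction, each row a metabolite, and any steady-state flux distribution v must satisfy S·v = 0. A minimal sketch, with a toy network and flux values invented for illustration (this is the general technique, not Genomatica's code):

```python
# Toy network:  -> G (uptake, v1);  G -> P (conversion, v2);  P -> (drain, v3)
species = ["G", "P"]
reactions = ["v1", "v2", "v3"]
S = [
    [1, -1, 0],   # G: produced by v1, consumed by v2
    [0, 1, -1],   # P: produced by v2, consumed by v3
]

def is_steady_state(S, v, tol=1e-9):
    """At steady state, every metabolite's net production S.v is zero."""
    return all(abs(sum(S[i][j] * v[j] for j in range(len(v)))) < tol
               for i in range(len(S)))

balanced = is_steady_state(S, [2.0, 2.0, 2.0])     # fluxes match: feasible
imbalanced = is_steady_state(S, [2.0, 1.0, 1.0])   # G accumulates: infeasible
```

Defining "all the reactions that can take place inside the cell" amounts to filling in S for the whole genome; the feasible flux distributions are then whatever traffic patterns the road map allows.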
Bernhard Palsson of the University of California, San Diego, a cofounder of Genomatica, says that while models of metabolism and transcriptional regulation have been around for decades, the company is the first to build such models on a genome scale. And, he adds, these genome-scale models are the first to accurately predict whole-cell behavior. Microbiologist Derek Lovley of the University of Massachusetts, Amherst, for instance, collaborated with Genomatica to develop a model of the metal-reducing bacterium Geobacter sulfurreducens.
TOWARD THE VIRTUAL CELL
A number of academic consortia are approaching whole cell modeling in a bottom-up manner, reducing cellular processes to equations and adding complexity as more proteomics, metabolomics, and genomics knowledge is gained. Their goal is to create models that can be used to facilitate experimental design as well as test the potential effects of drugs.
Scientists at the University of Connecticut Health Center, Farmington, developed the Virtual Cell (VCell), a computational framework that enables cell biologists to create models based on experimental data, plug in hypotheses, and make predictions about cellular processes based on the results. These predictions can then be tested experimentally, and the model parameters adjusted if needed.
The VCell is not an actual model of a cell, but a means of mathematically analyzing experimental results to build models that can predict the outcomes of new experiments, says VCell developer Les Loew. "It's really a tool for dealing with individual cell biological processes that are too complicated to deal with without a computer," he adds.
Other consortia are trying to simulate whole cells. Project CyberCell, based at the University of Alberta's Institute for Biomolecular Design, aims to model a living Escherichia coli cell.
Courtesy of Leslie M. Loew
Calcium dynamics in a neuronal cell are modeled using the Virtual Cell software environment. The panels on the left indicate the models accessible to this user from the Virtual Cell Model Database, while the workspace to the right shows windows for specifying the physiology of the system, defining the geometry and initial conditions of the model, and analyzing results.
"We actually track the position and the activity and the speed of every biomolecule within the cell, and we essentially let the computer do all the work and then through statistical analysis tell us what the result is," says Michael Ellison, executive director. Although it takes considerable computational power to build a cell model like this from the bottom up, Ellison is optimistic that a model of a simplified cell incorporating only core metabolic processes can be developed within five years; a more detailed model could take twice as long, says the university's Gordon Broderick.
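The "track every molecule, then do statistics" strategy can be sketched with a toy Brownian-dynamics run: simulate many molecules as random walks, then recover a macroscopic quantity, here mean squared displacement, statistically. All particle counts and step sizes below are invented for illustration:

```python
import random

def brownian_walk(steps, step_size, rng):
    """Random-walk one molecule in 3-D; return its squared displacement."""
    x = y = z = 0.0
    for _ in range(steps):
        x += rng.gauss(0.0, step_size)
        y += rng.gauss(0.0, step_size)
        z += rng.gauss(0.0, step_size)
    return x * x + y * y + z * z

rng = random.Random(0)
n_molecules = 500
msd = sum(brownian_walk(100, 1.0, rng)
          for _ in range(n_molecules)) / n_molecules
# For a 3-D random walk the mean squared displacement grows linearly
# with time: the expected value here is 3 * steps * step_size**2 = 300.
```

The computational burden Ellison alludes to follows directly: a real cell has billions of molecules and far richer chemistry than this three-line walker, so the per-particle bookkeeping multiplies quickly.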
The E-Cell project, led by Masaru Tomita of the Institute for Advanced Biosciences, Keio University, Japan, began in 1996 with the design of a hypothetical self-surviving virtual cell composed of 127 genes from the bacterium Mycoplasma genitalium.
An international consortium called Silicon Cell also aims to make "a precise computer replica of a real cell," according to project leader Hans Westerhoff, director of the Center for Research on BioComplex Systems, Free University Amsterdam. Unlike the CyberCell and E-Cell projects, the Silicon Cell limits the model to cases in which kinetic parameters are precisely known, rather than fitting parameters to the model based on experimental data. "The Silicon Cell does not fit, either at the individual enzyme level or at the system level. It uses previously determined kinetic properties of the components to calculate what happens," says Westerhoff. "So, it will actually refute hypotheses quite readily and demand that more experimental research is needed to establish what really is going on."
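The no-fitting philosophy can be sketched with a single enzyme: the Michaelis-Menten constants below stand in for independently measured values (they are hypothetical numbers for illustration), and the computed time course is a genuine prediction that experiment can then confirm or refute:

```python
def mm_rate(s, vmax, km):
    """Michaelis-Menten rate from independently measured constants;
    nothing here is fitted to the system-level data being predicted."""
    return vmax * s / (km + s)

def predict_timecourse(s0, vmax, km, dt=0.001, t_end=5.0):
    """Forward-Euler integration of substrate depletion."""
    s, t, course = s0, 0.0, []
    while t < t_end:
        course.append((t, s))
        s -= dt * mm_rate(s, vmax, km)
        t += dt
    return course

# "Measured" enzyme constants (hypothetical values for illustration)
course = predict_timecourse(s0=10.0, vmax=2.0, km=0.5)
final_s = course[-1][1]
```

If the predicted depletion curve disagrees with the measured one, the model is refuted rather than re-tuned, which is exactly the behavior Westerhoff describes as demanding more experimental research.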
Westerhoff says that the immediate goal of projects like Silicon Cell is to use knowledge about individual molecules in the cell to elucidate how the cell functions as a whole. "You make science testable. And of course, when it is testable it is also utilizable," Westerhoff says. For instance, drug developers can use the model to calculate a particular drug's effects, or those of genetic polymorphisms and deletions. But he says that although in silico models should be able to replace actual laboratory experiments in theory, users in reality would never trust a computer to replace experimentalists completely. "But it's an additional force in the game," he concedes.
WORKING FROM THE TOP DOWN
Some researchers take a different approach to the problem of cell modeling, using heuristic information to limit the complexity of the cell and to make simulation more tractable. "The bottom-up approach to systems biology that tries to model all the parts and processes with differential equations alone is just not realistic. No one in their right mind would try to model a computer with differential equations alone," says Eric Werner, CEO of Cellnomica.
"The main difference between top down and bottom up, in my opinion, is the level of abstraction and the type of assumptions which are made," explains Gordon Broderick of the University of Alberta. In a top-down approach, the designer dictates the large-scale behavior of the system, but in a bottom-up system, macroscopic behavior arises naturally from the interaction between the model's basic elements. As a result, he says, "A bottom-up approach is better poised to provide insight through unexpected results that are not preprogrammed into the problem."
Werner likens in silico cellular modeling to a hypothetical situation in which a person living in the 16th century attempts to reverse engineer a passenger jet. One way to approach the problem would be to break the airplane into its smallest component parts, analyzing and modeling each one in detail to get a better understanding of how the individual pieces work. But without an understanding of aerodynamics, this approach offers no information about how the airplane works as a whole. In the case of a cell or multicellular system, the process is even more complicated because a cell cannot be broken into its component parts while retaining knowledge of how those parts fit together in the first place. Trying to reverse engineer a cell or an organism from its parts alone, Werner says, would be computationally impractical.
Cellnomica therefore adopts a top-down approach to multicellular simulation, which allows the integration of bottom-up methods and data. Minimal models of whole multicellular systems are built using both low-level information about regulatory pathways and networks as well as high-level knowledge gleaned from developmental biology and an assumption that cells and multicellular organisms have some degree of organization and modularity. "Our approach is to build models of the whole object and not just the parts. The parts get their meaning by the role that they have in the organization that makes up the whole object," says Werner.
"Top-down" can also refer to efforts to link cell models to whole organ, or even whole organism, simulation. James Bassingthwaighte, of the University of Washington, and colleagues aim to model a cardiac cell in which all proteins exist in a stable state (i.e., there is no protein turnover), which can be studied by changing various conditions such as oxygen and substrate concentration. Such a simulation can then be used to examine rates of energy flow, regulation of substrate utilization, and the like. The goal of this "eternal cell" project is to incrementally integrate signaling pathways, protein synthesis, and regulation into the model and to eventually study phenomena like myocardial infarction at the cellular level.
The eternal cell project is part of a broader effort to model the heart at the physiologic rather than the single-molecule level, as well as part of the Physiome Project, which Bassingthwaighte started in 1990. Bassingthwaighte says he is interested in linking cellular metabolism in the myocardium with energetics, electrophysiology, muscle contraction, and ion channel regulation. "We have to understand this central aspect of metabolism and energetics before we can really understand how changes in gene transcription influence that central behavior," he says.
A COMMON LANGUAGE
With the growing availability of cellular models, one factor may be overlooked: whether a wide audience can actually use them. Hamid Bolouri of the Seattle-based Institute for Systems Biology explains that a typical simulation paper contains a complex set of data, equations, and insights from a variety of published sources, all rolled into a custom-built software package, which can make such models difficult to understand and to apply to other problems. In 1999, Bolouri initiated a consortium to develop a common modeling language, called the Systems Biology Markup Language (SBML); today 30 to 40 development projects conform to this standard.
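At its core, SBML describes species, compartments, and reactions in a declarative XML format that any conforming simulator can read. The fragment below builds a schematic, SBML-style document for a one-reaction model; the element names follow SBML Level 2 conventions, but this is a simplified illustration, not a complete, validated SBML file:

```python
import xml.etree.ElementTree as ET

# Build a minimal SBML-style model: one reaction, A -> B.
sbml = ET.Element("sbml", level="2", version="1")
model = ET.SubElement(sbml, "model", id="toy_pathway")

species_list = ET.SubElement(model, "listOfSpecies")
for sid in ("A", "B"):
    ET.SubElement(species_list, "species", id=sid, compartment="cell")

reactions = ET.SubElement(model, "listOfReactions")
rxn = ET.SubElement(reactions, "reaction", id="r1")
ET.SubElement(ET.SubElement(rxn, "listOfReactants"),
              "speciesReference", species="A")
ET.SubElement(ET.SubElement(rxn, "listOfProducts"),
              "speciesReference", species="B")

# Serialize: this text, not a custom binary format or bespoke code,
# is what gets exchanged between laboratories and tools.
document = ET.tostring(sbml, encoding="unicode")
```

Because the model lives in a shared, machine-readable format rather than inside one group's custom software, another laboratory's simulator can load it, rerun it, and extend it, which is precisely the portability problem the consortium set out to solve.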
Such an undertaking could make simulations available to a larger research community and facilitate collaborations between laboratories, says Bolouri. But the modeling community still needs to focus its efforts on establishing simulation as integral to molecular biology, he adds. "There's sort of a constant balancing act to do between things that are good for the community and for the long term, and the need to develop in the short term more infrastructure, more basic research."
In the pharmaceutical industry, at least, the message is out. "People are accepting that these models will be useful, that they can be built, and they will do something useful," says Genomatica's Palsson. Ten years ago, he explains, people might have laughed at the idea. But now, the volume and complexity of available data, coupled with some early successes, have lent legitimacy to the field. "I hear people say now from companies, that it's not a question of if they're going to build these models, it's a question of when."
Aileen Constans can be contacted at