In 2000, the type 2 diabetes drug Rezulin was withdrawn from the market after several dozen patients required liver transplants or died of liver failure. While the vast majority of drugs that make it to market are safe, such high-profile failures show that the drug development process is far from flawless.
The good news is that the number of drug targets has increased, as have both the supply of and the demand for high-throughput assays. Yet this revolution in biomedical science has not produced a pipeline brimming with new drugs. Chemical entities that show biological activity against a given target can become lead compounds, but those compounds still need to be turned into viable drugs.
The typical compound entering a Phase I clinical trial has been through roughly a decade of rigorous preclinical testing, but still has only an 8% chance of reaching the market.1 Some of this attrition is due to toxicity that goes undetected until late in development.
IN SILICO STRATEGIES
Computational toxicity predictions are not routinely used in preclinical discovery. The models are usually expensive, costing hundreds of thousands of dollars a year in licenses, service contracts, and support. Some are built on limited proprietary data sets and are therefore unreliable at predicting the structure-activity relationships of novel compounds.
It is possible to improve both the efficiency and the predictive power of these models by increasing the amount of data used to create them, though whether that will happen is uncertain. Computational models are typically built by using one portion of the available data to generate a relationship (the training set) while the rest is reserved as a test set. The larger and more diverse the training set, the more accurate the predictions. Pharmaceutical companies hold massive amounts of biological data from years of compound synthesis and testing; pooled together, those data could yield algorithms that predict activity and toxicity far more effectively.
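To make the training-set/test-set idea concrete, here is a minimal sketch in Python using scikit-learn. The descriptor matrix, the labels, and the choice of model are all illustrative stand-ins, not any vendor's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for pooled company data: each row is a compound
# described by 50 molecular descriptors; labels mark observed toxicity.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 50))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=2000)) > 0  # toy labels

# Reserve a quarter of the data as the held-out test set described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)  # the training set generates the relationship

# Predictive power is judged on compounds the model has never seen;
# larger, more diverse training sets generally push this number up.
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```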
"In the interest of advancing the field, pharmaceutical companies need to share the already available data," says Joe Votano, president of R&D at ChemSilico, Tewksbury, Mass. Steve Muskal, CEO of San Diego, Calif.-based Sertanty, agrees. The more powerful models could improve both the predictive power of the models as well as save time and resources, according to Muskal. Both ChemSilico and Sertanty are involved in developing software to aid drug discovery.
As biomedical knowledge grows and computing power increases, the predictive power of in silico models will improve. Muskal believes that by 2010, Big Pharma will be forced to use in silico tools, driven by drugs coming off patent protection, thinning pipelines, and an aging population's demand for more and cheaper drugs.
In fact, the FDA has mined its own databases to develop structure-activity relationship software that helps identify molecular substructures with potentially negative toxicologic properties early in the development process.1 Such models could flag toxic molecules even before they are synthesized, which would be the ideal scenario.
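A toy version of such a substructure screen can be written with the open-source RDKit cheminformatics toolkit. The two alerts below are invented for illustration; they are not the FDA's actual rules:

```python
from rdkit import Chem

# Illustrative structural alerts, expressed as SMARTS substructure patterns.
alerts = {
    "aromatic nitro": Chem.MolFromSmarts("[$(a-[N+](=O)[O-])]"),
    "aromatic amine": Chem.MolFromSmarts("c[NH2]"),
}

# A made-up candidate molecule (aniline), screened before any synthesis.
candidate = Chem.MolFromSmiles("c1ccc(N)cc1")
for name, pattern in alerts.items():
    if candidate.HasSubstructMatch(pattern):
        print("flagged:", name)
```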
IT'S ALL IN THE LIVER
Some toxicity prediction systems combine the power of genomics and bioinformatics. The premier tool is the DNA microarray, which can simultaneously monitor the expression levels of hundreds to thousands of genes from a variety of sources, including animal tissues, human blood and tissue samples, and cultured cells treated with experimental compounds. Correlating changes in gene expression with microscopic and macroscopic changes in organs such as the kidney and liver can identify gene signatures that predict toxic effects in rats.
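The core statistical move, ranking genes by how cleanly their expression separates damaged from healthy outcomes, can be sketched in a few lines of Python. The data here are simulated, with a 40-gene signature planted by hand; real microarray analyses involve far more careful normalization and multiple-testing control:

```python
import numpy as np
from scipy.stats import ttest_ind

# Simulated expression matrix: rows are rat samples, columns are genes.
rng = np.random.default_rng(1)
n_genes = 5000
healthy = rng.normal(size=(30, n_genes))
damaged = rng.normal(size=(30, n_genes))
damaged[:, :40] += 1.5  # plant a 40-gene toxicity "signature"

# Rank genes by how strongly expression differs between the two groups.
t_stat, p_val = ttest_ind(damaged, healthy, axis=0)
signature = np.argsort(p_val)[:40]  # the 40 most discriminative genes
print("candidate signature genes:", sorted(signature)[:10], "...")
```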
Companies like Iconix, in Mountain View, Calif., and Gene Logic are marketing prediction systems based on the gene-expression patterns seen in rats exposed to drugs. Mark Fielden, senior toxicologist at Iconix, says their system "can actually predict kidney damage in rats three weeks before it can be seen microscopically."
By making predictions from short-term exposure, such systems can reduce the time needed to study long-term organ damage in rats. They also bring automation and high-throughput screening capability to toxicity testing. However, it remains to be seen whether they can bridge the gap between rodent data and the responses of humans in clinical trials.
Fielden adds that Iconix is working on generating gene-expression data from human peripheral blood mononuclear cells treated with various compounds, which would help predict human gene-expression responses to toxic compounds. Donna Mendrick notes that Gene Logic is attempting to use rat gene-expression data to predict human toxicity.
In vitro cell-culture testing uses minuscule amounts of compound and is valuable for gene-expression studies. Since the majority of known toxicants affect the liver, the emphasis has been on predicting liver toxicity.
Human hepatocytes can be used for toxicity screening, but they become available only when livers are harvested for transplant, so their supply is erratic and unpredictable, and once cultured they survive for only a few days. A further limitation is that this approach reveals neither the effects of a compound's metabolites nor the systemic changes the compound causes.
Alternatives to hepatocyte cultures include liver slices, used to assay liver-specific toxicity, and liver spheroids. Liver slices provide a better estimate of liver toxicity, but they suffer from slice-to-slice variability and limited viability in culture. Spheroids are three-dimensional structures formed when hepatocytes are cultured in the presence of hepatic stellate cells; functionally closer to the intact liver, they offer a better system for metabolism and toxicity studies than cultured hepatocytes and remain viable in culture for a few weeks. Neither liver slices nor spheroids are easily available commercially, however, so neither is widely used.
Even though it is still early days for toxicogenomic approaches, they might ultimately provide sensitive and predictive safety assessment. Looking to the future, the FDA has started collecting genomic data through its Voluntary Genomic Data Submission program, according to the agency's Felix Frueh. Currently, most genomic data are exploratory in nature, and the FDA does not require their submission; voluntary submissions help regulatory scientists understand the data and prepare to review them in future filings.
BUILDING A BETTER RAT?
Part of the problem with traditional animal testing is that the genetic uniformity of the animals, highly prized for producing reproducible results, may be an Achilles heel when it comes to finding toxic side effects. Dose curves are generated in genetically identical rats, a far cry from the variety found in the human population, says David Threadgill, assistant professor of genetics at the University of North Carolina, Chapel Hill.
To address this issue, PhysioGenix has developed panels of multigenic rats that capture more than 80% of the genetic diversity found in the rat genome. The company is marketing these rats, and several pharmaceutical companies are already using them for preclinical toxicity studies, says Howard Jacob.
Jacob notes that eight drugs that passed safety trials in animals, but were subsequently found to be toxic in humans, were tested in the multigenic rats; all eight displayed indications of toxicity. The approach requires no radical change in the way drug discovery and development is carried out today; it saves neither time nor cost, but it might improve accuracy.
Predictive toxicology technologies have value, says Frueh: they can be used to prioritize lead compounds and may even guide the modification of some. In the coming years, the emphasis will be on speeding up the drug-discovery process and moving toxicology studies earlier in it.
The consensus in industry and at the FDA seems to be that both toxicogenomic and in silico modeling approaches will have increasingly important roles to play in preventing painful withdrawals of drugs from the market.
SNAKING THE PIPELINE FOR NEW DRUGS
Toxicity doesn't have to be bad. Harnessing the power of animal toxins such as snake venom to develop beneficial compounds is an attractive concept, but turning them into easily administered, side-effect-free drugs has so far proven problematic.
Manjunatha Kini and colleagues at the National University of Singapore (NUS) are isolating unique proteins from snake venoms, several of which have been licensed by Singapore biotechnology firm ProTherapeutics, where Kini is also chief scientific officer.
Structure-function relationships have guided compound development. In the early 1990s, Kini and his then-supervisor, the late Herbert Evans, found that proline was more common in the regions flanking protein interaction sites than anywhere else on the protein, and concluded that it plays a structural role.
Proline's unique side chain, which bonds to its backbone nitrogen, disrupts α-helix and β-sheet propagation, "like a parent protecting a teenager from peer pressure," says Kini. The structural kinks proline creates make the interaction site stand out, leaving it more readily available for binding.
Based on these principles, Kini and Evans developed a robust, simple, and direct method to identify interaction sites straight from the amino-acid sequence.1 "We can identify the interaction sites in two minutes and test to see if the prediction is correct in one to two weeks," says Kini.
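Kini and Evans's published algorithm is more sophisticated, but the flavor of a sequence-only scan can be conveyed in a short Python sketch that flags proline-bracketed segments as candidate interaction sites. The sequence and the length cutoffs below are invented for illustration:

```python
def candidate_sites(seq: str, min_len: int = 4, max_len: int = 15):
    """Yield (start, end, segment) for proline-flanked stretches of seq.

    start and end are 1-based residue positions of the segment itself.
    """
    prolines = [i for i, aa in enumerate(seq) if aa == "P"]
    for i, left in enumerate(prolines):
        for right in prolines[i + 1:]:
            segment = seq[left + 1:right]
            # Keep proline-free stretches of plausible peptide length.
            if min_len <= len(segment) <= max_len and "P" not in segment:
                yield left + 2, right, segment

# A made-up, venom-like sequence; real inputs would be venom protein sequences.
for start, end, segment in candidate_sites("MKTPLLVAGSPQWRKDEFGHIPASTNVLKP"):
    print(f"residues {start}-{end}: {segment}")
```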
Putting snake venom in the crosshairs of this technology yielded two pipeline compounds for ProTherapeutics: an anticoagulant from the venom of the rough-scaled snake, and an analgesic derived from king cobra venom. The short peptides, only 9–13 amino acids long, retain the biological function of the parent molecule but are too small to be antigenic. Moreover, the addition of flanking proline residues can increase potency as much as 10- to 15-fold, says Kini.
Their small size means these compounds show every sign of being deliverable sublingually, absorbed through the mucous membranes of the mouth. That would be a significant step in the drug-discovery process, says Jay Fox, director of the biomolecular research facility at the University of Virginia. "A peptide itself may not have the pharmacodynamic and pharmacokinetic properties that a drug company wants. That's where the real difficulty lies," he says.
Prialt, for example, Elan's FDA-approved analgesic derived from cone snail venom, must be delivered intrathecally and has serious side effects, including hallucinations. The first drug in the ProTherapeutics pipeline is the analgesic. Peter Wong Tsun Hon, associate professor in the NUS department of pharmacology, says the peptide leaves behind the neurotoxic sites found in the parent molecule, but Phase I clinical safety trials won't begin for nearly a year. "We know it works in animals and have done the initial toxicology studies, but have so far done this only with mice," says Kini.
- Jane Parry