Worldwide, millions of animals are used for toxicity testing of compounds intended for human and environmental use. Now, toxicologists have developed software that can accurately predict the outcomes of these assays.
The researchers compiled information from public databases, including PubChem and the US National Toxicology Program, on 10 million chemical structures and existing chemical safety data to develop an algorithm that was at least as reliable as animal testing itself. The tool predicted animal testing results with 87 percent accuracy, while repeated animal tests matched previous results only 81 percent of the time, on average. The results were published this week (July 11) in Toxicological Sciences.
“There is no doubt that this is an innovative approach,” Fiona Sewell, a program manager in toxicology and regulatory sciences at the National Centre for the Replacement, Refinement and Reduction of Animals in Research in the U.K. who was not involved in the work, writes in an email to The Scientist. “Time will tell whether, in practice, it delivers a reliable alternative to methods based on experimental animals.”
“I am extremely optimistic about this and other similar tools to limit animal testing,” says Andrew Rowan, chief scientific officer for The Humane Society of the United States, who was also not involved in the work. “Using animals to predict human safety is significantly flawed and very expensive. It takes three years to do comprehensive testing, while a tool like this takes minutes.”
For a fee, Illinois-based Underwriters Laboratories—an independent, global safety science company that partly funded this work—has already made the Hopkins team’s software available to companies wanting to screen their products prior to submitting safety data to regulatory agencies for review.
I started promoting animal testing alternatives 42 years ago and never dreamed that I would, within my career span, be able to predict the end of most animal testing, but that goal is now in sight.—Andrew Rowan, The Humane Society of the United States
Many countries, including the United States, have regulatory agencies that oversee new chemicals for commercial and environmental use and for consumer products, requiring the submission of at least some safety data. At the same time, many countries are also pushing to limit the use of animals in producing those data.
In 2008, the US National Institutes of Health, the Environmental Protection Agency (EPA), the National Toxicology Program (NTP), and the Food and Drug Administration (FDA) together initiated Tox21 to develop more efficient and timely non-animal toxicity testing. In 2013, the European Union put in place a ban on animal testing for cosmetic products and the European Chemicals Agency (ECHA) encourages alternatives to animal testing. And in 2016, the US government passed an update of the Toxic Substances Control Act (TSCA), which states that federal agencies need to help reduce and replace animal safety tests that industry conducts and submits to regulators.
To develop their new software tool, Thomas Hartung, director of the Center for Alternatives to Animal Testing at Johns Hopkins University, and his colleagues initially showed that of the 9,801 chemical substances they analyzed, those with similar structures generally had similar safety data. The researchers used a database of profiled chemicals made publicly available through the ECHA following the 2007 enactment of REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals), which mandates that companies identify and disclose safety and risk information on the substances they manufacture and sell.
For the current work, the team added data from additional databases to have the algorithm make 50 trillion pairwise comparisons of 10 million compounds. Using the available safety data, including that from animal testing, the developers then built a model that they compared to safety results from six animal tests for each chemical.
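The read-across idea the team builds on can be illustrated with a minimal sketch: represent each chemical as a set of structural fingerprint bits, score similarity between pairs, and let a query chemical inherit the hazard label of its most similar neighbors. The fingerprints, labels, and the simple majority vote below are hypothetical illustrations, not the authors' RASAR implementation or data.

```python
# Hedged sketch of read-across by structural similarity (hypothetical data,
# not the published RASAR model or the ECHA/PubChem datasets).

def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Toy knowledge base: fingerprint bits plus a known test outcome (invented).
known = [
    ({1, 4, 7, 9},  "toxic"),
    ({1, 4, 7, 8},  "toxic"),
    ({2, 3, 5, 6},  "non-toxic"),
    ({2, 3, 5, 11}, "non-toxic"),
]

def read_across(query: set, k: int = 3) -> str:
    """Predict a hazard label by majority vote of the k most similar chemicals."""
    ranked = sorted(known, key=lambda kv: tanimoto(query, kv[0]), reverse=True)
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

# A query structurally close to the "toxic" examples inherits their label.
print(read_across({1, 4, 7, 10}))  # → toxic
```

At the scale described in the paper, 10 million compounds yield on the order of 50 trillion pairwise comparisons, which is why the actual work required big-data infrastructure rather than a loop like this one.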
Their analysis revealed redundancy in animal testing. Two chemicals were each independently tested more than 90 times and the databases contained data on 69 chemicals that were each tested more than 45 times, often independently by different companies.
The multiple, independent test results were valuable in developing this tool, which showed “a high degree of uncertainty among the animal test results,” Christopher Grulke, a chemoinformatics scientist at the EPA, writes in an email to The Scientist.
To build the algorithm, the team incorporated both the animal testing results and 74 chemical property categories into their safety predictor model. Overall, the software was as good at predicting the safety results of a chemical as the animal tests were, and in some cases, the software did a better job.
There are limits to this method, which has not been shown to reliably predict more-complex toxicities that can manifest in the long-term, including the risk that a chemical compound causes cancer, says Hartung. “Such methods may or may not prove to be as predictive for long-term complex toxicology,” adds Grulke.
If the results of this digital chemical-similarity analysis are combined with additional biological data that can uncover the mechanisms of toxicity, “we could do a much better job of predicting human hazards and risks than using animal testing, which should be much more appealing to regulatory agencies than using modeling alone,” says Rowan.
According to Grulke, the agency “supports moving to non-animal approaches as they prove their applicability to chemical safety decision making,” and has been working toward this goal internally. The EPA is currently evaluating this new software along with additional algorithms from other research groups. These tools were all provided to the EPA at a recent Acute Toxicity Workgroup at the US Department of Health and Human Services and are part of a global effort to minimize animal testing.
According to Hartung, the FDA is also in the process of analyzing and testing this new software.
Rowan is encouraged by the new effort. “This is a relatively inexpensive way to test chemicals and I would like to see lots of people use this tool to predict toxicity outcomes. I started promoting animal testing alternatives 42 years ago and never dreamed that I would, within my career span, be able to predict the end of most animal testing, but that goal is now in sight, and with better outcomes for humans.”
Hartung says the team is now working to refine its algorithm and include data on the biological effects of compounds to incorporate not just acute toxicity but also more complicated safety endpoints. “This won’t be the end of all animal testing,” says Hartung. “But this is an important step to take the bite out of it.”
T. Luechtefeld et al., “Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility,” Toxicol Sci, doi.org/10.1093/toxsci/kfy152, 2018.