Testing treatments on mini tumors may save time in identifying which therapies work best, a new study shows.
Enrolling the right patient population could be key to a successful clinical trial.
June 1, 2015
In late 2011, the outlook for AstraZeneca’s cancer drug Lynparza (olaparib) was grim. After an interim analysis revealed disappointing results in a Phase 2 trial, the company ceased development of the ovarian cancer drug. While the oral medication did delay disease progression by a median three and a half months, there was no significant effect on overall survival (N Engl J Med, 366:1382-92, 2012), so the company decided not to pursue Phase 3 trials.
But a closer look at a subset of the Phase 2 trial participants convinced AstraZeneca researchers to forge ahead after all: for patients with BRCA mutations, the drug had delayed progression nearly twice as long as in the overall patient population (Lancet Oncol, 15:852-61, 2014). Last December, based on further data regarding the drug’s efficacy in patients with BRCA mutations who had received three or more prior rounds of chemotherapy, the US Food and Drug Administration (FDA) granted accelerated approval to Lynparza for this subgroup, contingent on the success of continuing Phase 3 trials.
The trials and tribulations of Lynparza’s path to market are not unique. Patients suffering from any disease are a heterogeneous group, and traditional clinical trials that simply average the effects of drugs across all participants can muddy the results, sometimes missing positive effects in a subset of patients. But in recent years, improved ability to collect and analyze genetic and proteomic data has researchers increasingly interested in developing targeted therapies only for those patients who are likely to benefit. According to a recent FDA white paper, 45 percent of FDA approvals in 2013 were for targeted therapies requiring genetic or other tests to determine which patients would benefit from them or be able to take them safely, compared with just 5 percent in the early 1990s. In January, the Obama administration announced a $215 million Precision Medicine Initiative that will fund agencies such as the National Institutes of Health and the FDA to develop and evaluate targeted therapies.
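The masking effect that averaging can produce is easy to see in a toy simulation. In the Python sketch below, the response magnitudes, biomarker prevalence, and sample size are all invented for illustration; they are not drawn from the Lynparza trials or any real data.

```python
import random

random.seed(0)

# Hypothetical scenario: a drug that only helps biomarker-positive
# patients, who make up ~20% of the trial population.
n = 1000
patients = [{"biomarker": random.random() < 0.2} for _ in range(n)]

# Assumed effect sizes (invented): +3.5 months of progression delay for
# biomarker-positive patients, ~0 months for everyone else, plus noise.
for p in patients:
    effect = 3.5 if p["biomarker"] else 0.0
    p["delay"] = effect + random.gauss(0, 1)

overall = sum(p["delay"] for p in patients) / n
positive = [p["delay"] for p in patients if p["biomarker"]]
subgroup = sum(positive) / len(positive)

print(f"overall mean delay:  {overall:.2f} months")
print(f"biomarker+ subgroup: {subgroup:.2f} months")
```

Averaged over everyone, the drug looks marginal; restricted to the biomarker-positive subgroup, the benefit is unmistakable. This is the arithmetic behind requiring a companion genetic test.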
Parsing patient heterogeneity and shepherding targeted therapies into the clinic come with unique challenges, however. For example, as biomarkers become more complex and numerous, researchers will have the hard task of deciding how to divide the patient population. “As we move forward, it is going to be increasingly difficult to do trials, because soon every cancer patient will have an orphan disease,” says Donald Berry, a professor of biostatistics at the University of Texas MD Anderson Cancer Center in Houston. Greater stratification of the population also means a smaller market, making it hard for companies to recoup the billions of dollars it costs to develop a drug.
Meanwhile, analyzing drugs’ effectiveness in treating precise subpopulations requires new trial designs. Researchers must avoid statistical pitfalls while navigating still-evolving FDA requirements for both demonstrating therapeutic efficacy and validating accompanying diagnostic tests. Below, The Scientist examines how clinical researchers are adapting clinical trial design in this new era of precision medicine.
Drug companies have historically conducted clinical trials out of a few major medical centers, recruiting a new set of patients for each therapy under investigation. But because specific biomarkers are often present in just a small percentage of the population, recruiting participants from a limited area is not a sustainable model for companies pursuing precision medicines. Instead, investigators testing targeted therapies are increasingly turning to collaborative efforts to recruit patients nationwide, or even internationally.
The US National Cancer Institute (NCI) last year established the National Clinical Trials Network, composed of a group of hospitals and medical centers in North America that will serve as trial sites. Not only will the network make it easier to recruit specialized subsets of patients to cancer therapy trials, it will also expand the potential patient pool to rural areas of the country. “We have no option but to be more collaborative,” says Lisa McShane, a biostatistician at the NCI. “It’s good for everybody.” Later this year, the NCI will use the new network to launch its NCI-MATCH (Molecular Analysis for Therapy Choice) trial, which will sequence patients’ lymphomas and advanced solid tumors and place participants in appropriate trials.
In late 2010, the nonprofit Cancer Research UK founded a similar initiative, called the Stratified Medicine Programme (SMP), which takes advantage of a network of 18 cancer centers across the United Kingdom. The SMP’s first trial, called the National Lung Matrix Trial, began in March 2015 and will assess the efficacy of eight drugs in development at Pfizer and AstraZeneca for treating non–small cell lung cancer. The researchers will target the therapies to 18 different genetic abnormalities determined by a gene panel developed by Illumina. “The Matrix Trial is . . . an umbrella program which basically screens large numbers of patients to generate the numbers necessary to go into what are often quite small cohorts,” explains Gary Middleton, head of the new trial and a professor of medical oncology at the University of Birmingham.
The European Prevention of Alzheimer’s Dementia (EPAD) consortium will apply the same wide-reaching strategy to construct a 24,000-patient registry of people from across Europe judged to be at risk for dementia. From this registry, EPAD researchers will select 6,000 people for closer monitoring, based on factors including as-yet-undetermined biomarkers that put them at the highest risk of dementia progression, then funnel 1,500 at a time into a clinical trial, projected to launch in 2016. By targeting preclinical patients and designing trials that will test multiple therapies for different dementia subtypes, the EPAD aims to overcome more than a decade of high-profile dementia drug failures, says EPAD coordinator Craig Ritchie, who studies the psychiatry of aging at the University of Edinburgh. “EPAD is an attempt to rip up the rule book completely.”
Each of these large-scale trial networks, while resource intensive to set up, could cut down on the legwork required to launch clinical studies in the future, seamlessly feeding patients into trials testing novel therapies and identifying new biomarkers. “The whole project is to create a needed infrastructure in order to be able to have this perpetual trial,” José Luis Molinuevo, a national lead of EPAD based at the Barcelonaβeta Brain Research Center in Spain, says of the Alzheimer’s initiative.
In some cases, there is such strong evidence that a treatment will only benefit a subset of patients that it would arguably be unethical—never mind a waste of time and resources—to test it on biomarker-negative patients. Other times, however, researchers do choose to trial targeted therapies on relatively broad patient populations to confirm that a treatment works in the way that preclinical research has suggested. And in some cases, the therapies can have surprising effects in nontarget patient populations, says NCI’s McShane. “If you lock in too early it’s very difficult to go back.”
Genentech’s Herceptin (trastuzumab), for instance, is a monoclonal antibody that binds to HER2 growth receptors on cancerous cells, preventing them from undergoing uncontrolled proliferation. Assuming the therapy would help the 20 percent of breast cancer cases where tumor growth is enhanced by an overabundance of HER2 receptors, researchers primarily tested it in these patients. But after the drug had been approved, researchers uncovered some results that made them wonder if Herceptin might be useful in a broader range of patients than previously thought.
The original trials of Herceptin as a treatment for metastatic breast cancer tested HER2 levels in a central laboratory. But a later trial investigating the drug’s use as a secondary therapy to prevent recurrence relied on multiple local labs to measure HER2 levels. In a reanalysis of samples after the later trial had concluded, the researchers found that 10 percent of participants identified as having elevated HER2 by the local labs had been misclassified. The more rigorous test revealed that, while HER2 was in some cases overexpressed, the receptor levels were below the cutoff to be considered HER2-positive in these patients. Nevertheless, many of these patients had benefited from trastuzumab therapy. Researchers are now testing the efficacy of trastuzumab combined with chemotherapy in a Phase 3 clinical trial of breast cancer patients whose tumors have only slightly elevated HER2 levels.
To capture all patient populations that might benefit from a therapy, researchers are turning to a new type of study, the so-called adaptive trial, which lets them start out testing multiple groups and then change course based on interim results. Midway through a study, researchers can increase the sample size for subgroups of patients who are doing particularly well on a therapy, for example, or adjust a treatment regimen to more thoroughly investigate doses that appear to be working best, explains MD Anderson’s Berry, who in 2000 cofounded Berry Consultants to help companies design adaptive clinical trials. “The usual thing is you do treatment A versus treatment B and close your eyes, and at the end of your study open your eyes and look at the data, and you’re surprised almost always with what you see,” says Berry. “Sometimes you say, ‘Gee, things are happening that I could have taken advantage of.’”
Of course, there’s a reason trials have traditionally been blinded: to avoid bias. To maintain that same scientific rigor in adaptive trials, adjustments are predetermined at the beginning of the study as a series of branching if-then situations, and cannot be tweaked based on researchers’ whims once the trial is underway.
Currently, Berry is co-principal investigator for the I-SPY 2 trial, a government, academia, and industry collaboration to test combinations of targeted therapies for their ability to shrink breast tumors prior to surgery in patients with 10 different biomarker combinations. Therapy-biomarker pairings that do poorly will eventually be eliminated from the study, while those that appear to be working will trigger the assignment of more patients with the relevant biomarkers to the treatment group. In this way, researchers can screen a wide variety of therapies for efficacy in diverse patients, without spending the time and money they would to run a separate study on each therapy. Therapy–patient group combinations that continue to do well will “graduate” from the study and may be tested independently in Phase 3 trials. One combination of two agents, carboplatin and veliparib, that was evaluated in the study is now being tested in a Phase 3 trial involving patients with triple-negative breast cancer, i.e., breast cancer negative for elevated estrogen, progesterone, and HER2 receptors.
A number of other trials, including the National Lung Matrix Trial and the EPAD Alzheimer’s trial, will also have adaptive designs. But adaptive trials are still new and require caution, says Berry. “The bane of our approach is false positives. We’re looking at lots of drugs; we’re looking at lots of subsets of patients. When you look at lots of things, you see some things that aren’t real.”
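Berry’s multiplicity warning is straightforward to demonstrate numerically. The sketch below simulates many drug-subgroup comparisons in which no drug truly works, then counts how many look like wins anyway; the trial sizes, response rate, and naive "win" threshold are invented for illustration.

```python
import random

random.seed(2)

# Multiplicity illustration: examine many drug-subgroup combinations
# where NO treatment has any real effect, and count spurious "wins".
# All numbers below are invented for demonstration.
N_COMBOS = 100      # number of drug x patient-subset comparisons
N_PER_ARM = 50
TRUE_RATE = 0.3     # same response rate in treatment and control arms

def null_comparison():
    """One comparison where both arms come from the same distribution."""
    treat = sum(random.random() < TRUE_RATE for _ in range(N_PER_ARM))
    ctrl = sum(random.random() < TRUE_RATE for _ in range(N_PER_ARM))
    # Naive rule: declare a "win" if treatment beats control
    # by at least 6 responders.
    return treat - ctrl >= 6

false_positives = sum(null_comparison() for _ in range(N_COMBOS))
print(f"{false_positives} of {N_COMBOS} null comparisons look like wins")
```

Even with no real effects anywhere, a handful of comparisons clear the bar by chance alone, which is why adaptive multi-arm trials need strict statistical control for the number of looks they take.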
Researchers seeking to develop targeted therapies need to start thinking early about their communications with regulatory bodies such as the FDA. The agency has in recent years issued guidelines for dealing with adaptive trials, noting that it is particularly important that investigators provide a thorough explanation of how they will ensure that the study will remain blinded even as the trial is adjusted based on intermediate outcomes. Typically, this requires designating people who are largely independent from the primary personnel to assess interim results and make prespecified adjustments.
It is also possible that sponsors of trials involving particularly unfamiliar adaptive designs may need to plan for extra discussion with FDA statisticians to assure that their plan will pass muster. “If a sponsor is considering the use of an adaptive design for a trial intended to support registration, then the sponsor should discuss this plan with [the FDA] early in the protocol development stage,” Lisa LaVange, director of the agency’s Office of Biostatistics, writes in an e-mail to The Scientist.
In addition, it is important to talk to the FDA early about any diagnostic tools that will need to accompany a new therapy. Historically, the FDA has exercised discretion in scrutinizing diagnostics developed in individual labs. “Unlike therapies, which go through a very rigorous regulatory process and get approval by the FDA to be marketed, for laboratory tests the situation has been quite different,” McShane says. But last fall, the agency released draft guidance stating that it plans to become stricter about validating even familiar tests. With the rise of targeted therapies, “the [test] result matters,” says McShane. “It’s going to make a difference in what treatment the patient gets.”