For clinical purposes, next-generation sequencing (NGS) has all but replaced its methodological predecessor, Sanger sequencing. It is faster. It is cheaper. But is next-gen sequencing alone sensitive and specific enough to catch every difficult-to-detect, disease-associated variant while avoiding false-positives?
“There is significant debate within the diagnostics community regarding the necessity of confirming NGS variant calls by Sanger sequencing, considering that numerous laboratories report having 100% specificity from the NGS data alone,” Ambry Genetics Chief Executive Officer Aaron Elliott and colleagues wrote in a study published last week (October 6) in The Journal of Molecular Diagnostics.
Elliott and colleagues simulated a false-positive rate of zero when analyzing the results of 20,000 hereditary cancer NGS panels covering 47 disease-associated genes. Using NGS alone, the researchers "missed [the] detection of 176 Sanger-confirmed variants, the majority in complex genomic regions (n = 114) and mosaic mutations (n = 7)," they reported in their paper.
In an interview with The Scientist, Elliott lamented a lack of quality-control guidelines regarding confirmatory sequencing methods among diagnostic labs.
The Scientist: What prompted this particular analysis?
Aaron Elliott: The debate within the diagnostic industry as far as the need to confirm variants. Every lab kind of has their own stance on if something like Sanger confirmation is needed. And the debate is getting very heated as companies offer cheaper and cheaper tests. As you keep dropping the price of testing, it’s very hard to keep these [confirmatory] methods around. . . . So different labs are coming out with different stances on the need to do this, and it’s really that [some] labs are trying to have, basically, rock-bottom pricing.
We wanted to go back and look at our own internal data where we start the test by Sanger-confirming every next-generation sequencing variant that is not benign—your variants of unknown significance, your likely pathogenic, and your pathogenic variants. We did that on 20,000 samples. . . . About 2 percent of real mutations is what you would miss if you did no Sanger confirmation at all.
Basically, the results [showed] that, number one, if you don’t Sanger-confirm calls at all, you’re . . . going to report out false calls, or, if you set your thresholds based on low sample numbers, you’re going to miss calls.
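The roughly 2 percent figure Elliott cites can be checked as a back-of-envelope calculation. The 176 missed variants come from the article; the total count of Sanger-confirmed variants is an assumed, illustrative number inferred from the reported 2.2 percent miss rate, not a figure stated in the study.

```python
# Numbers: 176 variants missed by NGS alone (from the article);
# total confirmed variants is an ASSUMED figure for illustration,
# back-calculated from the reported ~2.2 percent miss rate.
missed_variants = 176
total_confirmed = 8000  # assumption, not from the study

miss_rate = missed_variants / total_confirmed
print(f"{miss_rate:.1%}")  # → 2.2%
```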
TS: Your team still uncovered some false-positive results, and noted that these “were not evenly distributed across all genes as would be expected if they were random PCR or sequencing artifacts.”
AE: We looked at 47 genes in those 20,000 samples. And false-positives aren’t in every gene: they were in 20 of the 47 genes that we looked at. And on top of that, they are in specific genomic regions that are difficult to sequence. But those are the genomic regions that also have real mutations in them, as well. So if you were to do a 1,000-sample validation—which is a pretty big validation—and you looked at the same 47 genes that we looked at, you would see about five false-positives.
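Elliott's validation example scales linearly: if a 1,000-sample validation over the same 47 genes surfaces about five false-positive calls, the implied per-sample rate projects to a substantial count at the study's full scale. The projection below is an illustrative extrapolation of his estimate, not a result reported in the paper.

```python
# Hedged extrapolation of Elliott's estimate: ~5 false positives
# expected in a 1,000-sample validation of the 47 genes studied.
validation_samples = 1_000
expected_false_positives = 5  # Elliott's stated estimate

fp_per_sample = expected_false_positives / validation_samples  # 0.005
projected_fps_20k = fp_per_sample * 20_000  # scale to the study's cohort
print(projected_fps_20k)  # → 100.0
```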
TS: Why aren’t there guidelines for confirmatory methods?
AE: A lot of labs don’t want to do it. It’s a whole ’nother workflow in the lab, it costs a lot of money . . . and it increases the turnaround time on the test by about two days if you have to confirm something. . . . There’s more and more pressure to get these tests out faster and faster and cheaper and cheaper.
TS: After adjusting your analyses to simulate zero false positives, your team reported missing 2.2 percent of clinically relevant mutations. Was that surprising at all?
AE: I didn’t think it was surprising. Our philosophy is to start with the most sensitive assay, the most sensitive bioinformatics pipeline you can possibly start with when you begin your testing, which does require more Sanger confirmation. But it does allow you to pick up more mutations that you would have missed or would have been filtered out. A good example of that is mosaic mutations. In the study there were seven mosaic mutations that we would have missed if we were not Sanger-confirming calls.
TS: You and your colleagues propose quality thresholds for next-gen sequencing–based diagnostic screens. For all screens that don’t meet these, are you recommending confirmation by Sanger sequencing or some other method?
AE: Yeah, it doesn’t necessarily have to be Sanger confirmation. There are other methods that you could use. For deletions/duplications, you could use MLPA [multiplex ligation-dependent probe amplification] or array. You can use qPCR for certain tests. . . . Anything that doesn’t meet those specific thresholds does need to be confirmed.
Those thresholds cannot be accurately determined unless you have tens of thousands of samples.
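One way to see why threshold-setting demands tens of thousands of samples is to look at the statistical uncertainty of estimating a rare artifact rate. The sketch below (my illustration, not from the paper) computes the relative standard error of a binomial proportion estimate for an assumed rare false-positive rate: with 1,000 samples the estimate has error bars of roughly 45 percent, while 20,000 samples bring that down to about 10 percent.

```python
# Illustrative sketch (not from the study): relative uncertainty of a
# rare-event rate estimate shrinks roughly as 1/sqrt(n * p), so small
# validations leave region-specific thresholds poorly constrained.
import math

def relative_se(p: float, n: int) -> float:
    """Relative standard error of a binomial proportion estimate."""
    return math.sqrt(p * (1 - p) / n) / p

p = 0.005  # ASSUMED rare false-positive rate for a difficult region
for n in (1_000, 20_000):
    print(n, f"{relative_se(p, n):.0%}")
```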
TS: Your group reported spending $1.9 million to include Sanger sequencing confirmation for 20,000 samples. Considering the added cost of confirmatory screening, do you think other diagnostic labs will heed your group’s recommendations?
AE: You know, I don’t know. It’s hard to say. . . . When you look at diagnostic labs, there are definitely different tiers of quality in testing. There are certain labs that we believe are high-tier that go above and beyond in quality, and then you have your cheaper labs. And it’s your cheaper labs whose business models are to drive down [the cost of] genetic testing and, in order to do that, you need to eliminate these particular assays, which keep the sensitivity very, very high.
This type of testing—especially the type of testing that this paper is based on, hereditary cancer testing—is not really the time to cut corners. People make very big decisions based on these results. . . . I don’t think this is the time to try to save a couple hundred dollars by not confirming your next-generation sequencing calls. The data show that it needs to be done.
TS: What do you hope comes of this study?
AE: I hope people look at studies like this and understand that all next-generation sequencing tests on the market are not the same—they’re not created equal. People need to really understand how companies do their testing. Companies need to be transparent on the quality control that they have in place for testing.