A group of image-analyzing algorithms designed by Boston-based startup FDNA can diagnose certain genetic diseases based on people’s faces, according to a study published yesterday (January 7) in Nature Medicine. The algorithms, collectively termed DeepGestalt by the company, rely on deep learning and computer vision to detect patterns in patients’ facial photos and flag which of multiple possible genetic mutations could be behind a person’s condition.
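The paper does not release FDNA’s model, but the general recipe it describes (a convolutional network that embeds a cropped facial photo and scores it against a set of candidate syndromes) can be sketched with off-the-shelf tools. The snippet below is a minimal illustration using a standard pretrained ResNet as a stand-in feature extractor; the backbone, class count, image path, and untrained classification head are assumptions for demonstration, not details of DeepGestalt itself.

```python
# Illustrative sketch only: a generic CNN classifier over facial photos,
# standing in for the kind of model DeepGestalt is described as using.
# The backbone, class list, and image path are assumptions, not FDNA's.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

NUM_SYNDROMES = 92  # size of the candidate-syndrome set in one reported experiment

# Pretrained backbone with its classification head swapped for our label space.
# (The new head is randomly initialized; a real system would be trained on
# labeled patient photos before its scores meant anything.)
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_SYNDROMES)
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),  # a real pipeline would first detect and align the face
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def rank_syndromes(image_path: str, top_k: int = 10) -> list[tuple[int, float]]:
    """Return the top-k candidate syndrome indices with softmax scores."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(backbone(batch), dim=1).squeeze(0)
    scores, indices = probs.topk(top_k)
    return list(zip(indices.tolist(), scores.tolist()))

# Example call (hypothetical file path):
# print(rank_syndromes("patient_photo.jpg"))
```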
The approach “is clearly not perfect,” says FDNA’s chief technology officer, Yaron Gurovich. “[But] it’s still much better than humans are at trying to do this.”
DeepGestalt powers the company’s app, Face2Gene, which has been freely available to health care professionals since 2014. Doctors have already started using the technology as an aid, according to Nature, although the tool is not intended to be used to provide a definitive diagnosis.
The current study was designed to demonstrate the approach’s capabilities. In one experiment, the researchers tested whether DeepGestalt could determine, from facial photos alone, which of five possible genetic mutations was responsible for a patient’s Noonan syndrome.
The algorithms achieved 64 percent accuracy, substantially higher than the 20 percent expected if DeepGestalt were guessing at random among the five candidate mutations, and better than clinicians manage: a 2010 study of two experienced researchers found they could not reliably identify the mutation from facial phenotype alone.
Bruce Gelb, an expert on Noonan syndrome at the Icahn School of Medicine at Mount Sinai who was not involved in the work, tells STAT that the technology’s accuracy is “impressive.” However, he questions the value of the approach, given that genetic testing for this and other conditions is becoming routine in many parts of the world. “I don’t know why they undertook this, exactly,” he tells STAT. “It’s inconceivable to me that one wouldn’t send off the panel testing and figure out which one it actually is.”
See “Exome Sequencing Helps Crack Rare Disease Diagnosis”
In a separate experiment, DeepGestalt was presented with 502 images of patients with 92 different syndromes, and successfully included the true syndrome among its top 10 possibilities for 91 percent of the cases. Christoffer Nellåker, a computational biologist at the University of Oxford who was not involved in the work, tells New Scientist that the approach could help cut diagnosis times for rare diseases.
“The real value here is that for some of these ultra-rare diseases, the process of diagnosis can be many, many years,” he says. “This kind of technology can help narrow down the search space and then be verified through checking genetic markers.”
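The headline figure in that experiment, top-10 accuracy, simply asks how often the correct syndrome appears among a model’s ten highest-scoring suggestions. A small, self-contained sketch of the metric, run on made-up scores rather than the study’s data, looks like this:

```python
# Minimal sketch of the top-k accuracy metric reported in the study,
# computed here on randomly generated scores rather than real predictions.
import numpy as np

def top_k_accuracy(scores: np.ndarray, true_labels: np.ndarray, k: int = 10) -> float:
    """Fraction of cases whose true label is among the k highest-scoring classes."""
    # Sort each row's scores in descending order and keep the first k class indices.
    top_k_preds = np.argsort(-scores, axis=1)[:, :k]
    hits = (top_k_preds == true_labels[:, None]).any(axis=1)
    return float(hits.mean())

# Hypothetical example with the study's dimensions: 502 cases, 92 syndromes.
rng = np.random.default_rng(0)
scores = rng.random((502, 92))
true_labels = rng.integers(0, 92, size=502)
print(f"top-10 accuracy on random scores: {top_k_accuracy(scores, true_labels):.2%}")
# Purely random scores hover around 10/92, roughly 11 percent; the paper reports 91 percent.
```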
Gurovich and colleagues are now working to show that the algorithm can be applied more broadly. A preprint that the team published late last year on bioRxiv describes the application of the approach to “679 individuals with 105 different monogenic disorders” and reports a top-10 accuracy of 99 percent.
FDNA is also working to improve the diversity of its dataset, as others’ research has shown that the app is more effective at identifying disorders in Caucasian individuals than in individuals of African descent—unless training images showing African patients are specifically added. “We know this problem needs to be addressed,” Gurovich tells Nature, “and as we move forward we’re able to have less and less bias.”