Although artificial intelligence has raised fears of job loss for many, we doctors have thus far enjoyed a smug sense of security. There are signs, however, that the first wave of AI-driven redundancies among doctors is fast approaching. And radiologists seem to be first on the chopping block.

Diagnostic radiologists are medical doctors who use images to detect and characterize disease. They have become indispensable in modern medicine, often holding the key to diagnosis, prognosis, and management. As a doctor who relies heavily on medical imaging in traumatic and emergent cases, I know first-hand the value of a radiologist’s keen, expert eye.

Despite this importance, the limitations of modern radiology, coupled with dizzying advances in AI, are converging to drive automation of the field.

Simultaneously, image analysis, the core task of radiology, is the area of AI research that has seen the greatest gains. Computer vision is central to many of the most highly anticipated emerging technologies, from driverless cars to augmented reality. It is unsurprising, then, that a number of well-heeled stakeholders have invested heavily in its development in recent decades. What’s more, the machine-learning architecture best suited to computer vision, the deep neural network, is the same architecture that underpins advances in AI more generally. It has therefore been possible to co-opt breakthroughs from other AI domains, such as speech recognition and natural language processing, to further advance the state of the art in computer vision.
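
For readers unfamiliar with the jargon, a deep neural network for images is, in code, little more than a stack of learned filters feeding a classifier. Below is a minimal, purely illustrative sketch in Python using the widely used PyTorch library; the layer sizes and the normal/abnormal labels are arbitrary choices for exposition, not any particular published model.

```python
# A minimal, illustrative convolutional neural network -- the kind of
# deep-learning architecture described above. Layer sizes are arbitrary.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Convolutional layers learn local image filters (edges, textures).
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                   # downsample by 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # pool to one value per filter
        )
        # A final linear layer maps the pooled features to class scores.
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# E.g., score a batch of four 224x224 grayscale images as normal/abnormal.
scores = TinyConvNet()(torch.randn(4, 1, 224, 224))
```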

As a result, AI currently outperforms humans in a number of visual tasks including face recognition, lip reading, and visual reasoning. And now, it seems, we can add radiology to the list.

Andrew Ng, co-founder of the online learning platform Coursera and former chief scientist at “China’s Google,” Baidu, recently announced the development of CheXNet, a convolutional neural network capable of recognizing pneumonia and other thoracic pathologies on chest X-rays better than human radiologists. Earlier this year, a Hungarian group developed a similar system for detecting and classifying features of breast cancer in mammograms. In 2017, University of Adelaide researchers published details of a bot capable of matching human radiologist performance in detecting hip fractures. And, of course, Google achieved superhuman proficiency in detecting diabetic retinopathy in fundus photographs, a task outside the scope of most radiologists.
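
CheXNet’s recipe, as described in the accompanying paper, is surprisingly compact: an ImageNet-pretrained DenseNet-121 whose final layer is replaced by 14 outputs, one per thoracic pathology, trained with a multi-label binary cross-entropy loss. The sketch below shows that general setup in Python, assuming the PyTorch and torchvision libraries; the training batch is random stand-in data rather than real X-rays, and hyperparameters are omitted.

```python
# Illustrative sketch of a CheXNet-style setup (not the authors' code).
import torch
import torch.nn as nn
from torchvision import models

NUM_PATHOLOGIES = 14  # the 14 labels in the ChestX-ray14 dataset

# Start from an ImageNet-pretrained DenseNet-121 backbone.
model = models.densenet121(weights="IMAGENET1K_V1")

# Swap the 1,000-class ImageNet head for a 14-output layer. A sigmoid
# (folded into the loss below) turns each output into an independent
# probability, since one X-ray can show several pathologies at once.
model.classifier = nn.Linear(model.classifier.in_features, NUM_PATHOLOGIES)

# Multi-label objective: binary cross-entropy on each pathology.
criterion = nn.BCEWithLogitsLoss()

# One hypothetical training step on a stand-in batch of eight images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, NUM_PATHOLOGIES)).float()
loss = criterion(model(images), labels)
loss.backward()
```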

Beyond single, two-dimensional radiographs, a team at Oxford University developed a system for detecting spinal disease from MRI data with performance equivalent to that of a human radiologist. Meanwhile, researchers at the University of California, Los Angeles, reported detecting pathology on head CT scans with an error rate more than 20 times lower than that of a human radiologist.

Although these particular projects are still in the research phase and far from perfect—for instance, often pitting their machines against a limited number of radiologists—the pace of progress alone is telling. 

Others have already taken their algorithms out of the lab and into the marketplace. Enlitic, founded by Aussie serial entrepreneur and University of San Francisco researcher Jeremy Howard, is a Bay Area startup that offers automated interpretation of X-rays and chest CAT scans. Enlitic’s systems can reportedly judge the malignancy of nodules up to 50 percent more accurately than a panel of radiologists and identify fractures so subtle they would typically be missed by the human eye. One of Enlitic’s largest investors, Capitol Health, owns a network of diagnostic imaging centers throughout Australia and is positioning itself for a broad rollout of the technology. Another Bay Area startup, Arterys, offers cloud-based medical imaging diagnostics; its services extend beyond plain films to cardiac MRIs and CAT scans of the chest and abdomen. And there are many others.

To be sure, these services frame themselves as “support products” that “make doctors faster,” rather than replacements that make doctors redundant. This language may reflect a reserved view of the technology, though it likely also represents a marketing strategy designed to avoid threatening or antagonizing incumbents. After all, many of the customers themselves, for now, are radiologists.

That the technology is coming seems all but certain. Of course, a number of challenges remain, some technical, others around liability, regulation, and industry resistance. However, these are challenges shared with other automation technologies (think: driverless cars) that we’ve become increasingly adept at addressing. 

White-collar labor is far from sheltered against the looming AI storm. We doctors, in particular, have enjoyed an unwarranted sense of security. Although a number of medical fields may be poised for automation, radiology seems first in line. The rollout of this technology is likely to be incremental, even surreptitious. Patients and their loved ones will benefit, as will the state purse and the macroeconomy. Radiologists, however, both actual and aspiring, should anticipate these changes and plan accordingly.

Mutaz Musa is a physician in the Department of Emergency Medicine at New York Presbyterian Hospital/Weill Cornell, a healthcare consultant, and a software developer in New York City.
