Mutaz Musa’s recent opinion piece in The Scientist predicts that AI will replace radiologists as interpreters of medical images. As a radiologist for the past 38 years, I disagree. It is impossible for me to argue that AI software will never (as in, within millennia) be able to perform at a human level. Instead, I will argue a much more tightly defined set of premises: A) our function is far more than simply recognizing white spots, something that does not appear to be understood by those predicting our replacement, and B) the aggressive promotion of such predictions can harm both patients and the healthcare system.

Premise 1: Most think that seeing spots is our work. In fact, knowing what the spots are and what they mean for a specific patient is the true work of radiologists.

To illustrate, consider this image:

I sent this photo to Google’s image-recognition engine.

It does not do a particularly good job of recognizing the objects in the photo. More importantly, a human perceives in a fraction of a second that there is far more information in the image than the keywords Google spits out. My mind begins to construct a story; I see the meaning. I also immediately know the picture was staged, and can guess why it was staged. The AI engine doesn’t begin to approach this complexity. In just this way, experienced radiologists synthesize the objects in a scan into a coherent, meaningful interpretation: the patient’s story.

The current speculation about AI replacing radiologists focuses exclusively on recognition of objects in an image. As such, these systems are analogous to spell checkers in writing. While spell checkers have been available for several decades, all of us know of their shortcomings. Context eludes them.

Lesson: Image recognition falls far short of image interpretation. All of the current marketing noise about AI replacing radiologists is about recognition, none of it about interpretation.


Premise 2: New technologies often benefit from “credulous acceptance”: a technology may be so intoxicating that skepticism is abandoned and it is enthusiastically, prematurely, and expensively embraced.

One example: breast computer-aided diagnosis (CAD). This was the AI of the early 2000s. The software processes mammograms and marks areas of suspicion. The initial reports indicated an average increase in cancer detection of about 10 percent over a radiologist’s unaided reading. That translates to about one additional cancer found among 2,000 patients screened. The software I use daily is set to mark 1.80 suspicious areas per case, so across those 2,000 screenings I must sort through 3,600 marks to find the one cancer I might have missed. That is an awful lot of false positives. This is what happens when theory collides with reality. Worse, there has never been any proof that this additional detection would save even one life.
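The arithmetic behind those figures can be checked in a few lines. This is a back-of-the-envelope sketch, not part of any CAD product: the baseline detection rate of roughly 5 cancers per 1,000 screened is my illustrative assumption, while the 10 percent boost and 1.80 marks per case are the figures given above.

```python
# Back-of-the-envelope check of the CAD false-positive arithmetic.
baseline_per_1000 = 5.0   # assumed typical screening detection rate (illustrative)
boost = 0.10              # ~10 percent increase claimed in the initial reports

extra_cancers_per_1000 = baseline_per_1000 * boost      # 0.5 extra cancers
patients_per_extra_cancer = 1000 / extra_cancers_per_1000

marks_per_case = 1.80     # suspicious areas flagged per mammogram
marks_reviewed = patients_per_extra_cancer * marks_per_case

print(patients_per_extra_cancer)  # 2000.0 patients screened per extra cancer
print(round(marks_reviewed))      # 3600 CAD marks reviewed to find it
```

In other words, under these assumptions a radiologist wades through about 3,600 machine-generated marks for every one cancer the software adds to the tally.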

However, because software companies aggressively marketed this technology, mammography facilities [in the U.S.] had to purchase a $179,000 add-on to their equipment to avoid appearing less than state-of-the-art. Between 2001 and 2006, the percentage of patients who had this technology applied to their mammograms increased from 3.6 percent to 60.5 percent. Then, in 2015, a definitive paper appeared reviewing CAD results from 323,973 patients. The final answer: it is useless. The authors were unable to reproduce even the minimal 10 percent increase in sensitivity that won the software approval from the US Food and Drug Administration (FDA). And we all have expensive white elephants in our centers.

We must be very wary of the queue of entrepreneurs impatient to reap profits from new technologies. Credulous acceptance of the bright, shiny penny of new technology is their bread and butter. Musa makes this point nicely by noting Enlitic. He writes that the company implicitly denies that its services interpret images, calling them “support products,” and he speculates that this reflects a “reserved view of the technology.”

Those with a genuinely “reserved view of the technology” would not be promoting software that has not had extensive testing in a clinical setting. The language does serve a purpose, though: it avoids any responsibility, specifically legal responsibility, for the results. It may remind you of the familiar disclaimer: “This product is not intended to diagnose or treat any condition.”

Radiologists are accustomed to technical disruption in our field. Most of what we do did not exist 40 years ago. CT, MRI, and PET have replaced older, less accurate methods, and that transition was guided by radiologists working with engineers and manufacturers. Now AI is the new technology, and it is no different. Our professional organization, the American College of Radiology, is proactively engaged with the US Food and Drug Administration (FDA) and manufacturers to construct frameworks for testing this type of software, so that we clearly understand what place it should take in the care of patients.

Radiologists should, and will, guide the deployment of advanced software tools. This should not be done by entrepreneurs who have no experience applying new technologies to patient care. We should avoid credulous acceptance of AI for medical imaging and demand extensive proof of benefit before unleashing it on the public.

Phil Shaffer is a radiologist at Riverside Radiology and Interventional Associates in Columbus, Ohio.
