Machines can now be trained to see things humans cannot, and likely never will. Researchers have recently demonstrated this principle across a wide range of biomedical imaging applications. From obviating the need to stain pathology slides, to finding rare cells without cytometry, to characterizing skin lesions, retinal scans, chest X-rays, brain CT scans, heart MRIs, and much more, AI stands to change the way we do medicine (see “Artificial Intelligence Sees More in Microscopy than Humans Do,” The Scientist, May 2019).
This advance relies on deep neural networks, systems of artificial neurons that can accurately and rapidly detect complex patterns. It’s an approach to artificial intelligence (AI) that has gained remarkable momentum since it was introduced about a decade ago, and today it chiefly relies on supervised learning—that is, training the network on ground truths in the form of accurately labeled images. And so-called deep learning is proving its utility for more than just image analysis; speech and text are also well-suited inputs. These different types of structured information can be ingested by a deep neural net with an insatiable appetite for data.
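To make the idea of supervised learning concrete, here is a minimal, purely illustrative sketch in Python using only NumPy: a tiny neural network trained on synthetic labeled points standing in for labeled images. All names and the dataset are invented for illustration; a real biomedical system would use a deep convolutional network and expert-labeled scans, not this toy example.

```python
# Supervised learning in miniature: a small neural network learns to
# separate two labeled classes of synthetic 2-D points (the "ground truth").
import numpy as np

rng = np.random.default_rng(0)

# Synthetic labeled dataset: class 0 clusters near (-1,-1), class 1 near (1,1).
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# One hidden layer: 2 inputs -> 8 hidden units -> 1 output probability.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(500):
    # Forward pass: compute predictions from the current weights.
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2).ravel()    # predicted probability of class 1

    # Backward pass: gradients of the binary cross-entropy loss.
    grad_out = (p - y)[:, None] / len(y)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1 - h**2)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Gradient-descent update: nudge weights toward the labeled truth.
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

# Evaluate on the training data with the final weights.
p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
accuracy = np.mean((p > 0.5) == y)
```

The essential loop is the same at any scale: predict, compare against labels, and adjust—only the network depth and the data change.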
That capacity stands in stark contrast to us humans, who struggle with data overload. While researchers need to conduct much more validation of AI as applied to biomedicine in the years ahead, especially in clinical environments, it is clear that a new model of man-and-machine medicine is emerging.
The title of my latest book, Deep Medicine, has several layers of meaning. In the first, “deep” refers to deep phenotyping, or the markedly enhanced understanding of a human being’s medical essence achieved by integrating all of their data: biologic, demographic, anatomic, physiologic, environmental, and so on. We need machines to help us achieve that goal. Deep learning is the book title’s second layer of meaning, referring to the need to refine the predictive power of the approach and improve its productivity, speed, accuracy, and workflow. Finally, these advances open the way for deep empathy, a sentiment that will hopefully reinvigorate the doctor-patient relationship once data analysis and pattern recognition have been sufficiently outsourced to machines and patients have taken on more responsibility for their own health and healing.
The implications for the symbiosis between human doctors and machines are striking. The field of ophthalmology, for example, has been especially eager to leverage AI, which has proven capable of diagnosing diabetic retinopathy without the need for a human doctor, much less a specialist. That’s important because presently half of patients with diabetes are not screened for retinopathy, a major preventable cause of blindness. Moorfields Eye Hospital, in collaboration with DeepMind in the UK, demonstrated that urgent triage for more than 50 eye conditions could be achieved by deep neural networks without a single mistake. Similarly, during colonoscopy screening, physicians often miss small polyps, which are just as likely to be precancerous as larger ones. Researchers recently published results from a randomized trial showing that real-time machine image analysis markedly improved doctors’ detection of such features. Overall, the holes in our current medical practice—the mistakes, the misses, the lack of accuracy—might be mended with deep learning algorithms.
Beyond imaging, AI can collaborate with human clinicians via patient speech. Conversations between patients and doctors can be transcribed by AI, using natural language processing and machine learning, into a synthesized note that exceeds the typical quality of notes that appear in electronic health records. AI note taking has already started at select health care centers in the UK and in China, and the approach is being trialed in the US. This system has the potential to restore eye contact, a key step in bringing back presence, trust, and empathy to the patient-doctor relationship.
Outside of formal health care settings, automation of common diagnoses is also taking hold. AI-driven algorithmic support for self-diagnosis of heart rhythm disturbances, urinary tract infections, and pediatric ear infections has been validated to some degree and is progressing toward wide-scale use. Ultimately, patients may be able to use a virtual medical coach that takes in all of a person’s data on a continuous and seamless basis to help prevent or better manage chronic conditions.
We are still in the early days of AI in medicine. It’s very long on promise, but short on clinical validation. To achieve the progress that is within our grasp, it will be vital to perform rigorous, prospective clinical trials. We may not see another opportunity like this for generations to come: the chance to improve accuracy and precision, lower cost, and enhance humanity in medicine.
Eric Topol is the founder and director of the Scripps Research Translational Institute, Professor of Molecular Medicine, cardiologist, and author of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again (Basic Books, 2019).