For cell biologists, fluorescence microscopy is an invaluable tool. Fusing dyes to antibodies or inserting genes coding for fluorescent proteins into the DNA of living cells can help scientists pick out the location of organelles, cytoskeletal elements, and other subcellular structures from otherwise impenetrable microscopy images. But this technique has its drawbacks. There are limits to the number of fluorescent tags that can be introduced into a cell, and side effects such as phototoxicity—damage caused by repeated exposure to light—can hinder researchers’ ability to conduct live cell imaging.
These issues were on biomedical engineer Greg Johnson’s mind when he joined the Allen Institute for Cell Science in Seattle in 2016. Johnson, whose doctoral work at Carnegie Mellon University had focused on creating computational tools to model cellular structures (see “Robert Murphy Bets Self-Driving Instruments Will Crack Biology’s Mysteries”), was eager to find a computational way around those limitations.
Seeing this work in a movie, in a live cell, in 3D, was really jaw dropping.
—Rick Horwitz, Allen Institute for Cell Science
“Because of technological limitations, we can only see a few things in the cells at once,” Johnson says. “So we wanted to figure out ways that we could, at the very least, predict the organization of many more structures from the data that we already have.”
Specifically, they wanted to develop a method to identify a living cell’s components in images taken using brightfield microscopy. This technique is simpler and cheaper than fluorescence microscopy, but it has a major disadvantage: it produces images only in shades of gray, making a cell’s internal structures difficult to decipher. So the scientists decided to create a computer algorithm that could combine the benefits of both methods by learning to detect and tag cellular structures the way fluorescent labels can, but in brightfield images instead.
To do this, the team turned to deep learning, an artificial intelligence (AI) approach in which algorithms learn to identify patterns in data. They trained convolutional neural networks, a type of deep learning model commonly used to analyze and classify images, to learn the relationship between paired brightfield and fluorescence microscopy images of several cellular components, including the nuclear envelope, cell membrane, and mitochondria.
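The basic training recipe is easy to illustrate. The sketch below, a toy written in PyTorch rather than the team’s published code, shows the general idea: a small 3D convolutional network is fit to paired image volumes so that it learns to predict a fluorescence-like channel from a brightfield stack using a per-voxel mean-squared-error loss. The architecture, tensor shapes, and training settings are illustrative assumptions, and random arrays stand in for real microscope data.

```python
# Minimal sketch (not the published model): a tiny 3D CNN that learns to map
# a brightfield z-stack to a predicted fluorescence channel from paired examples.
import torch
import torch.nn as nn

class TinyLabelFreeNet(nn.Module):
    """Toy 3D conv net: brightfield volume in, predicted fluorescence volume out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Paired training data: each brightfield volume has a matching fluorescence volume.
# Shapes are (batch, channel, z, y, x); random tensors stand in for real images.
brightfield = torch.rand(8, 1, 16, 64, 64)
fluorescence = torch.rand(8, 1, 16, 64, 64)

model = TinyLabelFreeNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # penalize per-voxel differences between prediction and label

for epoch in range(5):
    optimizer.zero_grad()
    predicted = model(brightfield)
    loss = loss_fn(predicted, fluorescence)  # how far the "virtual stain" is from the real one
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

The published model is a much deeper network trained on real paired image stacks, but the core pattern, pairs of brightfield and fluorescence volumes driving a regression loss, is the same.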
After comparing many pairs of images, the algorithm was able to predict the location of structures that fluorescent labels would have tagged, but in 3-D brightfield images of live cells (Nat Methods, 15:917–20, 2018). The researchers found that the tool was very accurate: its predicted labels were highly correlated with the actual fluorescent labels for many cellular components, including nucleoli, nuclear envelopes, and microtubules. By applying the technique to a series of brightfield images and merging the outputs, “we [were able to get] this beautiful time-lapse of all these cell parts moving around and interacting with each other,” Johnson tells The Scientist.
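The accuracy check described above, comparing predicted labels against the actual fluorescent labels, amounts to correlating intensities between two image volumes. A minimal sketch of that comparison, with hypothetical array names and random data standing in for real images, could look like this:

```python
# Illustrative only: compute the voxel-wise Pearson correlation between a
# predicted fluorescence volume and the measured fluorescence image of the same cell.
import numpy as np

def pearson_r(predicted: np.ndarray, measured: np.ndarray) -> float:
    """Flatten both volumes and correlate intensities voxel by voxel."""
    p = predicted.ravel().astype(np.float64)
    m = measured.ravel().astype(np.float64)
    p -= p.mean()
    m -= m.mean()
    return float((p * m).sum() / (np.linalg.norm(p) * np.linalg.norm(m)))

# Hypothetical stand-ins for a predicted and a measured fluorescence stack.
predicted_stack = np.random.rand(16, 64, 64)
measured_stack = predicted_stack + 0.1 * np.random.rand(16, 64, 64)
print(f"Pearson r = {pearson_r(predicted_stack, measured_stack):.3f}")
```

Values close to 1 indicate that the predicted label closely tracks the measured fluorescence signal.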
“Seeing this work in a movie, in a live cell, in 3D, was really jaw dropping,” says Rick Horwitz, executive director of the Allen Institute for Cell Science, who wasn’t directly involved in the project. “It was really a bit like magic.”
Laura Boucheron, an electrical engineer at New Mexico State University who was not involved in the work but coauthored an accompanying perspective article in the same issue of Nature Methods, tells The Scientist that the results were “shockingly impressive.” She adds that the images generated by the algorithm are “remarkably similar” to those produced using fluorescence microscopy. “The brightfield images are, to a human, visually not particularly interesting. They are not as clear—in terms of the structures present—as fluorescent images,” Boucheron says. “But based on the results, clearly there is information [in the brightfield images] that the network is learning to interpret.”
Johnson notes that a big upside to his team’s method is that, contrary to the common belief that deep learning algorithms require thousands of images to learn, this tool could be trained with just dozens. “This is something that a graduate student can gather in an afternoon,” he adds. The researchers were also able to use their deep learning algorithm to identify the location of proteins that make up myelin, the protective sheath around nerve fibers, in 2-D electron microscopy images.
The cool thing about this technology is that it can be applied so broadly.
—Steve Finkbeiner, Gladstone Institutes
Still, the method has some limitations. According to Johnson, one key issue is that the technique does not work on all cellular structures, because some simply do not appear in images taken with certain forms of microscopy. In their recent study, for example, the algorithm had difficulty identifying a few structures in brightfield images, including Golgi apparatuses and desmosomes, junctions that hold cells together. Another limitation is that, while the tool requires a relatively small training set, a model trained on images from one microscope might not work on images gathered from another.
The team is now investigating potential applications of the technique. Horwitz suggests that, in addition to making imaging studies faster and cheaper, the tool could eventually be applied in pathology to help detect diseased cells or to rapidly reveal how cellular structures change in disease.
Another group, which included Steve Finkbeiner, a neuroscientist at the Gladstone Institutes and the University of California, San Francisco, and his colleagues at Google, developed a similar AI-based cell-labeling technique last year (Cell, 173:792–803.e19, 2018). “I think the cool thing about this technology is that it can be applied so broadly, and I don’t think we have a great feel yet for what the limits are,” Finkbeiner says.
Boucheron notes that she is currently investigating applications of these AI-based approaches to image analysis in both biology and astronomy—another field where researchers rely on a variety of instruments to capture and analyze natural phenomena. “I work with astronomy data quite a bit, and particularly with solar images,” she explains. “I’ve been looking for several years for ways to kind of translate between some of the different [instruments] that are used to image the Sun.”
Techniques that apply deep learning to image analysis could be useful wherever a microscope or telescope is used, Horwitz says. This latest study is “just the tip of the iceberg.”