AI Networks Generate Super-Resolution from Basic Microscopy

A new study uses deep learning to improve the resolution of biological images, but elicits skepticism about its ability to enhance snapshots of sample types that it has never seen before.

Dec 17, 2018
Jef Akst

ABOVE: A deep neural network enabled the conversion of confocal images of HeLa cell nuclei (left) to super-resolution images (middle) comparable to those achieved using the super-resolution imaging technology known as stimulated emission depletion (right).
OZCAN LAB AT UCLA

Using a type of artificial intelligence, scientists have turned lower-resolution micrographs of cells into high-quality images of the sort typically achieved using super-resolution technologies. The approach, published today (December 17) in Nature Methods, could put super-resolution microscopy in the hands of a far greater number of labs, by making it possible to achieve such high-quality images from standard benchtop microscopes, coauthor Aydogan Ozcan of the University of California, Los Angeles tells The Scientist. “[Super-resolution approaches] are really limited to resource-rich environments in terms of both equipment and expertise. Now, through AI, we’re changing the game.”

Over the past year or two, researchers in the field of microscopy have tinkered with AI techniques to improve the process of acquiring images, as well as their quality. A handful of studies have achieved super-resolution imagery from lower-quality starting images. 

In this latest work, Ozcan and his colleagues trained so-called deep neural networks, computer models that learn nonlinear mappings between paired input and output images, to transform confocal and fluorescence microscopy images into high-quality pictures like those generated by stimulated emission depletion (STED) and structured-illumination microscopy (SIM), respectively. Comparisons with images acquired via the super-resolution techniques revealed that the neural networks are “not hallucinating,” Ozcan says. “It’s really showing the super-resolution features embedded in the object.”
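The paired-training idea can be sketched with a toy example: a small network learns to undo a known blur on synthetic 1-D "signals," standing in for low-resolution inputs and super-resolution targets. Everything here is illustrative (the study used deep convolutional networks trained on real micrograph pairs, not this architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, H = 200, 16, 32  # samples, signal length, hidden units

# Synthetic training pairs: sharp spike trains (targets) and their
# blurred versions (inputs), mimicking low-res vs. super-res images.
sharp = (rng.random((N, D)) > 0.9).astype(float)
kernel = np.array([0.25, 0.5, 0.25])
blurred = np.array([np.convolve(s, kernel, mode="same") for s in sharp])

# One-hidden-layer network trained to invert the blur.
W1 = rng.normal(0, 0.1, (D, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, D)); b2 = np.zeros(D)
lr = 0.1

def forward(x):
    h = np.maximum(0, x @ W1 + b1)   # ReLU hidden layer
    return h, h @ W2 + b2

_, y0 = forward(blurred)
loss0 = np.mean((y0 - sharp) ** 2)   # error before training

for _ in range(500):                  # plain gradient descent on MSE
    h, y = forward(blurred)
    g = 2 * (y - sharp) / N           # dLoss/dy
    gh = (g @ W2.T) * (h > 0)         # backprop through ReLU
    W2 -= lr * h.T @ g;       b2 -= lr * g.sum(0)
    W1 -= lr * blurred.T @ gh; b1 -= lr * gh.sum(0)

_, y1 = forward(blurred)
loss1 = np.mean((y1 - sharp) ** 2)   # error after training
print(f"MSE before: {loss0:.4f}, after: {loss1:.4f}")
```

The key point, mirrored in the paper's setup, is that the network only ever sees input/target pairs; whether the mapping it learns transfers to structures it was never trained on is exactly the generalization question debated below.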

“I think it’s solid,” says Samuel Yang, a research scientist at Google. “If the appearance of the [structure of interest] is not going to change very much, I think it’s perfectly valid to use this technique.”

Ozcan and colleagues also trained a model to improve the resolution of images of bovine pulmonary artery endothelial cells taken with a 10x objective (left) into images (middle) comparable with those taken with a 20x objective (right). Panels d–f show a digital zoom of the cell’s F-actin (magenta) and microtubules (green).
OZCAN LAB AT UCLA

The utility to biologists may hit some limitations. Peter Horvath, a computational cell biologist at the Biological Research Center of the Hungarian Academy of Sciences, says he thinks that deep neural networks could miss key nuances in the samples. “It copies content from another image that looks similar, but usually in research we’re looking to find something extraordinary or different from the others,” he says. “This is exactly where this method would fail because it would not preserve the differences.”

Some researchers have had success capturing such abnormalities with deep neural networks. Earlier this year, for example, Pasteur Institute computational biophysicist Christophe Zimmer and colleagues developed a network to reduce the number of frames—and thus time—needed to construct an image comparable with one derived from the super-resolution technique known as localization microscopy. Trained on microtubules, the network had no problem accurately achieving super-resolution with images of abnormal microtubules, Zimmer says.

The ability of the model to generalize to structures different from those it had been trained on had its limits, Zimmer warns. When tested on images of nuclear pores, “the output was essentially a gigantic artifact,” he says. The neural network is “trying to fit small filaments through these nuclear pores, so the image is full of filaments instead of these octagonal structures.”

Ozcan and his colleagues report that their networks can successfully make super-resolution versions of novel sample types. For example, a supplementary figure in their paper appears to illustrate how a model trained on images of actin microfilaments could accurately improve the resolution of images of mitochondria or blood vessels. “We have evidence that it’s generalizing this super-resolution concept on even new types of samples that it has not seen,” says Ozcan.

This claim drew skepticism from the experts The Scientist spoke with. “I think that’s stretching it a little bit,” says Yang.

Poor performance when generalizing to different types of images is a common shortcoming of this type of AI approach to improve resolution, notes Broad Institute computer scientist Allen Goodman. The problem is that the networks are often “way overtuned for whatever problem they were working on” in training, he says.

On the other end of the spectrum, a network that can accurately generate super-resolution images of novel sample types would truly democratize the technique, as the model would only have to be trained with true super-resolution images once, not repeatedly for each type of sample a research group wants to examine. 

Ozcan emphasizes that the message in his paper is that it’s always best to retrain the networks to the new sample type. “But generalization is already there,” he says. “How far you can push it—that’s the question that I think everybody has.”

H. Wang et al., “Deep learning enables cross-modality super-resolution in fluorescence microscopy,” Nat Methods, doi:10.1038/s41592-018-0239-0, 2018.