IMAGE COURTESY OF DR. FINKBEINER, GLADSTONE INSTITUTES AND UCSF

Micrographs of fluorescently labeled cells are undoubtedly beautiful, but they require invasive and sometimes disruptive or deadly protocols to get their glow. To avoid such perturbations, researchers have developed a computer program that can distinguish between cell types and identify subcellular structures, among other features, all without the fluorescent probes our human eyes rely on.
“This approach has the potential to revolutionize biomedical research,” Margaret Sutherland, program director at the National Institute of Neurological Disorders and Stroke, which partially funded the work, says in a statement.
The researchers, who published their work in Cell today (April 12), designed a neural network, a program modeled loosely on the brain, using an approach called deep learning, in which the program learns from data to recognize patterns, form rules, and apply those rules to inputs it has never seen before.
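To make the idea above concrete, here is a minimal sketch of supervised learning on image-label pairs: a simple model is shown synthetic unlabeled images together with the per-pixel labels a fluorescent stain would provide, and it learns a rule it can then apply to a new image. Everything here is an assumption for illustration (synthetic data, a two-parameter logistic classifier rather than a deep network); it shows the training-loop pattern, not the authors' actual model.

```python
# Illustrative sketch only: synthetic data and a tiny logistic classifier,
# standing in for the paper's deep network, to show learning from
# (image, label) pairs and applying the learned rule to unseen images.
import numpy as np

rng = np.random.default_rng(0)

def make_image():
    """Synthetic 16x16 'cell': a bright blob (the nucleus) plus noise."""
    img = rng.normal(0.1, 0.05, (16, 16))
    cy, cx = rng.integers(4, 12, size=2)
    yy, xx = np.mgrid[0:16, 0:16]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= 9  # nucleus region
    img[mask] += 0.8
    return img, mask.astype(float)

# Training set: pairs of (image, per-pixel "stain" label).
X, Y = zip(*(make_image() for _ in range(200)))
X = np.stack(X).reshape(200, -1)  # flatten each image to 256 pixels
Y = np.stack(Y).reshape(200, -1)

# One shared logistic rule: predict "nucleus" from pixel brightness.
w, b = 0.0, 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))  # predicted probability per pixel
    w -= 5.0 * np.mean((p - Y) * X)         # gradient step on logistic loss
    b -= 5.0 * np.mean(p - Y)

# Apply the learned rule to an image the model has never seen.
img, truth = make_image()
pred = (1.0 / (1.0 + np.exp(-(w * img.flatten() + b)))) > 0.5
accuracy = np.mean(pred == truth.flatten())
print(f"pixel accuracy on unseen image: {accuracy:.2f}")
```

The real system in the paper learns far subtler rules (live versus dead cells, neurons versus astrocytes, dendrites versus axons) from many convolutional layers rather than a brightness threshold, but the workflow is the same: train on paired images, then predict labels without the stain.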
With high-quality images, the program identified nuclei within cells with near-perfect accuracy. It could also distinguish dead from living cells, spot neurons within groups of cells that included astrocytes and immature dividing cells, and even tell a dendrite from an axon.
“Techniques like this tend to have a democratizing effect,” opening up opportunities for smaller groups, Molly Maleckar, the director of mathematical modeling at the Allen Institute for Cell Science who was not involved in the study, tells Wired.
The authors of the study say future work will focus on optimizing the network and improving its performance on tasks where it was less robust, such as identifying neuronal subtypes and finding axons in high-density cultures.