Artificial intelligence is changing how researchers examine the microscopic biological world. At the same time, machine learning approaches are being applied to images at far larger scales. From snapshots of the brain and other organs to satellite images of Earth’s surface, intelligent computer programs can spot trends or features of complex systems that escape visual detection by experts.
Camera Traps
Aided by cameras with shutters triggered by motion, animal researchers can keep an eye on their field sites even from far away. But such camera traps snap away at anything that passes by, and it still takes a lot of human effort to slog through photos to identify the animals and make note of what they’re doing. (See “Streakers, Poopers, and Performers: The Wilder Side of Wildlife Cameras,” ...)
A tool developed last year by researchers at the University of Wyoming showed that AI—along with tens of thousands of volunteers—can help with this task. In a project called Snapshot Serengeti, citizen scientists labeled 3.2 million pictures—tagging them with information such as the species present, numbers of individuals captured in the shots, and the animals’ behavior. The researchers then developed a convolutional neural network, a deep learning program that mimics the way the brain makes connections, and showed it the images that had been annotated by online volunteers. After training, the model correctly identified roughly 94 percent of the images (PNAS, 115:E5716–25, 2018).
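The paper tested deep network architectures on the volunteer-labeled images. As a rough illustration of the general recipe, a transfer-learning sketch in PyTorch might look like the following; the backbone choice, folder layout, species count, and hyperparameters are assumptions for illustration, not the authors’ pipeline.

```python
# Minimal sketch: fine-tuning a convolutional neural network on labeled
# camera-trap images. Paths, class count, and hyperparameters are
# illustrative assumptions, not the Snapshot Serengeti setup.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_SPECIES = 48  # assumed number of species labels

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects one folder per label, e.g. camera_trap_images/zebra/*.jpg
train_set = datasets.ImageFolder("camera_trap_images", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")          # ImageNet features
model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)   # new classifier head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:       # one pass over the labeled images
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

Fine-tuning a pretrained backbone is a common shortcut when labeled images are scarce; with millions of labels, as here, networks can also be trained from scratch.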
The model can process the majority of the images, matching the accuracy of human assessments, and hand the tough ones off to experts, says Mohammad (Arash) Norouzzadeh, the first author of the group’s work and a PhD student at the University of Wyoming. Of course, some images are tricky even for people, he adds. For instance, when an animal is moving, is too close to or too far from the camera, or is partly out of frame, the picture may not receive enough labels to be used for training. The next step is to develop more-advanced algorithms that can extract information from camera trap projects with less training, Norouzzadeh says.
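One simple way to implement that hand-off is confidence thresholding: accept the model’s label only when its top softmax probability clears a cutoff, and queue everything else for expert review. A minimal sketch, with an assumed threshold rather than the study’s tuned one:

```python
# Minimal sketch of confidence-based triage: keep high-confidence
# predictions automatically and flag the rest for human experts.
# The 0.95 cutoff is an assumption, not the paper's tuned value.
import torch
import torch.nn.functional as F

def triage(model, images, threshold=0.95):
    """Return (auto-accepted labels, indices needing expert review)."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(images), dim=1)    # per-class probabilities
        confidence, predicted = probs.max(dim=1)   # top class and its score
    keep = confidence >= threshold
    return predicted[keep], torch.nonzero(~keep).flatten()
```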
Plant Spies
Instead of watering or applying pesticides to an entire field, farmers may be able to act more selectively, thanks to AI’s ability to spot plants in need. Scientists and engineers from the Spanish National Research Council (CSIC) in Córdoba, Spain, deploy camera-equipped drones to map areas of land and snap pictures that can be mined for information using machine learning and object-based image analysis, a method for grouping pixels into objects.
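The segmentation step of object-based image analysis can be illustrated with superpixels, which group neighboring pixels into coherent regions whose size, position, and color then serve as features. The sketch below uses scikit-image’s SLIC algorithm as a stand-in for whatever segmentation software the team used, and the file name is hypothetical.

```python
# Minimal sketch of object-based image analysis: group pixels into
# spatially coherent "objects" and summarize each one. SLIC superpixels
# stand in for the CSIC team's actual segmentation method.
from skimage import io
from skimage.measure import regionprops
from skimage.segmentation import slic

image = io.imread("drone_field_photo.jpg")   # hypothetical drone image
segments = slic(image, n_segments=500, compactness=10)

# Per-object features (size, position, mean color) can feed a classifier.
for region in regionprops(segments):
    mask = segments == region.label
    print(region.label, region.area, region.centroid, image[mask].mean(axis=0))
```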
The team trained the model by providing information about how much water plants had received along with a set of pictures of the plants. In preliminary tests presented at the SPIE Commercial + Scientific Sensing and Imaging meeting in April 2018, the algorithm could discriminate between images of well-hydrated versus water-stressed hydrangea and butterfly bush (doi:10.1117/12.2304739, 2018). These are “differences that are not obvious in the field,” says Jose Peña, one of the CSIC researchers.
The team is also turning to AI to locate tiny weeds lurking in the field, allowing farmers to target small areas with herbicides. For this application, the researchers employed a machine learning approach called Random Forest that learns from photos of the field. To differentiate between crops and weeds, the model uses context clues such as size or the placement of the plant in or outside of a row, achieving nearly 90 percent accuracy in one patch of sunflowers (Remote Sens, 10:285, 2018).
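A random forest of this kind works on per-object features rather than raw pixels. The sketch below uses scikit-learn with made-up feature values that echo the context clues described above (object size and distance from the nearest crop row); the numbers are purely illustrative.

```python
# Minimal sketch: a random forest separating crop objects from weeds
# using context features. All values are fabricated for illustration,
# not taken from the sunflower study.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row: [object area in pixels, distance from nearest crop row in pixels]
X = [[850, 2], [900, 4], [60, 35], [45, 50], [820, 3], [55, 40]]
y = ["crop", "crop", "weed", "weed", "crop", "weed"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))  # fraction of held-out objects correct
```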
The researchers are working to optimize the algorithms, hoping to create AI-enabled “smart sprayers” that image rows of crops and detect and spray weeds more efficiently than sprayers currently on the market.
Forecasting Wildfires
Satellite cameras capture wildfires blazing across the land. With these pictures in hand, researchers at the University of Waterloo in Ontario, Canada, are using an AI strategy called reinforcement learning to create models that predict how the fires spread. In an iterative process using images from previous fires, the model receives images showing a fire’s location every 16 days, the length of time between satellite pictures. The model then predicts the next 16 days’ spread and receives feedback about the accuracy of its prediction, improving the model’s understanding of how fires move. As it progresses, the model learns “rules” that wildfires follow—for example, that fire stops when it meets a lake (Front ICT, 5:6, 2018).
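The feedback loop can be illustrated with a toy one-step reinforcement learner: for each cell of a landscape, an agent predicts whether the cell will burn, receives a reward when its prediction matches the observed spread, and updates its action values. Everything below (the features, the ground-truth rule, and the tabular agent) is an assumption for illustration, not the Waterloo group’s deep-learning model.

```python
# Toy reinforcement-learning loop: predict per-cell fire spread, get a
# reward for correct predictions, and update action values. Illustrative
# only; the actual study trains a deep agent on satellite imagery.
import random

ALPHA, EPSILON = 0.1, 0.1   # learning rate, exploration rate
# State: (is a neighboring cell burning?, is this cell water?)
# Actions: 0 = predict "no burn", 1 = predict "burn".
Q = {(n, w): [0.0, 0.0] for n in (0, 1) for w in (0, 1)}

def observed_spread(neighbor_burning, is_water):
    """Assumed ground truth: fire spreads to a neighbor unless it is water."""
    return int(neighbor_burning and not is_water)

for step in range(5000):
    state = (random.randint(0, 1), random.randint(0, 1))
    if random.random() < EPSILON:                       # explore
        action = random.randint(0, 1)
    else:                                               # exploit
        action = max((0, 1), key=lambda a: Q[state][a])
    reward = 1.0 if action == observed_spread(*state) else -1.0
    Q[state][action] += ALPHA * (reward - Q[state][action])

# The agent ends up encoding rules like "fire stops at water": for a
# burning neighbor next to a water cell, predicting "burn" scores poorly.
print(Q[(1, 1)])
```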
The researchers found that their model holds up well against other machine-learning approaches and against physics-based models. Applying AI to real-world wildfire data is a challenge because the data are noisy. “The real world has a lot of complications that we don’t anticipate,” says study coauthor Mark Crowley, a computer scientist at Waterloo. “It will push us to find better algorithms or software.”
Other environmental problems that reinforcement learning could help study include the spread of infectious disease and climate, says Crowley. Researchers have already applied other machine learning techniques to predict flooding (Neural Comput Appl, 27:1129–41, 2016) and drought (Geomat Nat Haz Risk, 8:1080–102, 2017), and computer scientists are continually working to make their tools more powerful.
Brain Scans
Scientists use three-dimensional MRI scans to track how a brain showing signs of Alzheimer’s disease changes over time. With the goal of improving Alzheimer’s disease diagnoses, researchers from Columbia University and Carnegie Mellon University trained a convolutional neural network to mine those scans. After being shown brain scans from patients with the disease and from healthy controls, the model could distinguish diseased from healthy brains in scans it hadn’t previously seen with 93 percent accuracy (bioRxiv, doi:10.1101/456277, 2018). Another model that analyzed 2-D slices extracted from the 3-D scans also performed well, indicating that patients could someday receive a diagnosis from a shorter scan that images less of the brain, according to the researchers.
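Working directly on volumetric scans calls for 3-D convolutions. A minimal PyTorch sketch of such a classifier follows; the layer sizes and the 64-voxel input cube are chosen for illustration and are not the study’s architecture.

```python
# Minimal sketch of a 3-D convolutional classifier for volumetric MRI.
# Layer sizes and input dimensions are illustrative assumptions.
import torch
from torch import nn

class Brain3DCNN(nn.Module):
    def __init__(self, num_classes=2):   # Alzheimer's vs. healthy control
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16 * 16, num_classes)

    def forward(self, x):                # x: (batch, 1, 64, 64, 64) volume
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

scan = torch.randn(1, 1, 64, 64, 64)     # stand-in for a preprocessed scan
print(Brain3DCNN()(scan).shape)          # -> torch.Size([1, 2])
```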
“A lot of neurologic and psychiatric conditions are . . . localized to certain areas of the brain,” says Frank Provenzano, one of the study’s authors. In addition to improving Alzheimer’s diagnoses, deep learning approaches could shed light on which regions of the brain drive the disease. Alzheimer’s shrinks the brain, causing changes to several parts of the organ, so the scientists were surprised that their model identified one region—the hippocampal formation, which includes the hippocampus and nearby entorhinal cortex—as the major contributor to its prediction of Alzheimer’s.
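A standard way to probe which regions drive such a prediction is occlusion analysis: mask out part of the scan and measure how much the model’s disease score drops. The sketch below illustrates the idea, reusing the toy classifier above; it is not necessarily the attribution method the authors used.

```python
# Minimal sketch of occlusion analysis: zero out a cubic block of the
# scan and measure the drop in the disease score. A large drop suggests
# the block contributed strongly to the prediction. Sizes are assumed.
import torch

def occlusion_importance(model, scan, corner, size=16):
    """Confidence drop when a size^3 block at `corner` is masked out."""
    model.eval()
    with torch.no_grad():
        baseline = torch.softmax(model(scan), dim=1)[0, 1]   # disease score
        occluded = scan.clone()
        x, y, z = corner
        occluded[:, :, x:x + size, y:y + size, z:z + size] = 0.0
        masked = torch.softmax(model(occluded), dim=1)[0, 1]
    return (baseline - masked).item()

# e.g. occlusion_importance(Brain3DCNN(), scan, corner=(24, 24, 24))
```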
Researchers are using similar AI approaches in other health-care contexts, including analyzing MRI images of infants’ brains to assess the risk of autism (Nature, 542:348–51, 2017), examining tumors in the liver (Radiology: Artificial Intelligence, 1:e180019, 2019), and scanning the eye to probe retinal disease (Brit J Ophthalmol, 103:167–75, 2019).