INSIDE THE MIND OF A FLY:

Courtesy of Rachel Joynes

Dorsoventral view of a thin section through the larval central nervous system of the fruit fly, stained in red for serotonin and in green for dopaminergic and serotonergic neurons. The cell bodies lie at the lower, ventral part of the photos; above them is an extensive array of synaptic varicosities.

To the uninitiated, three-dimensional microscopy makes the pretty pictures of fluorescently labeled cells that grace the covers of scientific journals. But to today's microscopists, the capacity to render images from 3-D and 4-D datasets is critical for studying the distances between objects in a sample and for tracking how complex samples change over time. "It's hard to look at 2-D slices one at a time and then put that together in your mind in order to see the relationships among objects," says Dean Sequera, vice president of...

SHARPENING TOOLS

Fundamentally, 3-D imaging involves capturing stacks of 2-D images via optical or physical sectioning and rendering these stacks into a 3-D image using specialized software. Confocal microscopy is one of the most common imaging methods. This technique reduces out-of-focus haze but can be toxic to live cells, so some researchers use widefield or spinning-disk confocal microscopy instead, both of which are less toxic but also tend to yield blurry pictures.
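In code, the rendering step can be as simple as a projection through the stack. The sketch below is illustrative only (the file name is a placeholder, and it assumes a multi-page TIFF z-stack read with the tifffile library); it builds a maximum-intensity projection, the most basic way to flatten a 3-D dataset into a viewable image.

import numpy as np
import tifffile  # reads multi-page TIFF stacks

# Load a z-stack with shape (slices, height, width); "stack.tif" is a placeholder name.
stack = tifffile.imread("stack.tif").astype(np.float32)

# Maximum-intensity projection along the optical (z) axis:
# each output pixel keeps the brightest voxel above it.
mip = stack.max(axis=0)

# Projections along the other axes give orthogonal side views of the same volume.
side_view = stack.max(axis=2)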

Deconvolution, or image restoration, can help bring these datasets into focus. While simple algorithms sharpen the image by subtracting out-of-focus light, more advanced iterative methods preserve the total fluorescence spread through the sample and increase the contrast between the background and the features of interest. "Deconvolution collects all of that light and puts it back essentially where it came from. This makes your structures a lot brighter, and it actually makes the signal-to-noise ratio better, so you can get a lot more information and a lot more accurate information about your sample," explains David Biggs, senior research scientist at AutoQuant Imaging, Watervliet, NY.

Deconvolution still produces only a best guess, though the software incorporates quality criteria to ensure that this estimate is the best possible. "If you're devising an algorithm just to remove the fuzziness of the image, and you don't pay attention to making sure you do not remove other structures, then you might get out a picture that looks nice but is actually unfit to draw conclusions from in the biological sense," explains Hans van der Voort, founder of Scientific Volume Imaging (SVI), Hilversum, Netherlands. The company's Huygens deconvolution platform computes an estimated image based on an optical model of the microscope, and compares this with the actual recorded image. The software continues to change the estimate until it matches this image, says van der Voort.
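Huygens' own algorithms are proprietary, but the estimate-compare-update loop van der Voort describes can be illustrated with the classical Richardson-Lucy iteration, which, like other iterative methods, conserves the total fluorescence while sharpening the estimate. This is a minimal 2-D sketch, not the commercial implementation; in practice the point-spread function (PSF) would come from an optical model of the microscope or a measured bead image.

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=25):
    """Iterative, flux-conserving deconvolution (Richardson-Lucy sketch)."""
    psf_mirror = psf[::-1, ::-1]
    # Start from a flat guess with the same mean brightness as the recorded image.
    estimate = np.full(observed.shape, observed.mean(), dtype=float)
    for _ in range(iterations):
        # Blur the current estimate with the optical model (the PSF) ...
        blurred = fftconvolve(estimate, psf, mode="same")
        # ... compare it with the actually recorded image ...
        ratio = observed / (blurred + 1e-12)
        # ... and update the estimate so the two agree better on the next pass.
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate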

An alternative for removing out-of-focus blur is the ApoTome from Carl Zeiss of Jena, Germany. The device transforms a conventional fluorescence microscope into a structured illumination system with optical sectioning capability similar to that of a confocal microscope. In structured illumination, a patterned grid is projected onto the focal plane of the sample. The grid is moved in three predefined steps, and an image is recorded at each step. Combining the three images yields an optical section from which out-of-focus light has been removed.
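The arithmetic behind the three-step scheme is compact. One widely used reconstruction, sketched here under the assumption that the grid is shifted by one-third of its period between exposures (not necessarily Zeiss's exact processing), exploits the fact that out-of-focus light is not modulated by the grid and therefore cancels in the pairwise differences.

import numpy as np

def optical_section(i1, i2, i3):
    """Combine three grid-illuminated images (grid shifted by 1/3 period each time)
    into one optically sectioned image. Only in-focus structure carries the grid
    modulation, so unmodulated, out-of-focus light drops out of the differences."""
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)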

ON BEING TRENDY

Despite the rapid pace of software development, companies can find it difficult to keep up with emerging imaging trends. Alison North, director of the Rockefeller University Bio-Imaging Resource Center, ran into this problem when constructing a 3-D image of a live Xenopus tadpole, which at 8 mm long was too large to view all at once under a microscope.1 North collected 30 neighboring tiled stacks, 50 slices each, using a Zeiss confocal microscope. Lacking software for arranging tiles in three dimensions, North called Zeiss and asked the company to write a macro specific to her application. "A lot of [software development] comes about by getting back to the companies and saying, 'You can't do what we want to do,'" explains North.
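The Zeiss macro itself is not described, but the bookkeeping such a tool must perform can be sketched: each tile stack is dropped into a larger volume at its known stage position. The version below is deliberately naive, assuming non-overlapping tiles laid out row by row on a regular grid, with no registration or blending at the seams.

import numpy as np

def mosaic_volume(tiles, grid_cols):
    """Abut a list of (slices, height, width) tile stacks, ordered row by row,
    into one large volume. Real stitching would also register and blend
    overlapping tile edges."""
    nz, ny, nx = tiles[0].shape
    grid_rows = int(np.ceil(len(tiles) / grid_cols))
    volume = np.zeros((nz, grid_rows * ny, grid_cols * nx), dtype=tiles[0].dtype)
    for k, tile in enumerate(tiles):
        r, c = divmod(k, grid_cols)
        volume[:, r * ny:(r + 1) * ny, c * nx:(c + 1) * nx] = tile
    return volume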

Some companies respond to new imaging techniques by filling a niche, and many imaging software vendors currently are developing methods for resolving 3-D data sets over time. Coventry, England-based Improvision recently introduced software called Volocity specifically for this purpose. Barry Condron of the University of Virginia, Charlottesville, uses the software to study synapse formation in the central nervous system, a complex environment that changes rapidly over time. "When we looked around, Volocity was the only one that could do all this complex 3-D analysis, wasn't platform-specific, and evolved very rapidly," Condron notes.

MANAGING THE DATA

3-D INSIGHT:

Courtesy of James Evans

Isosurface reconstruction of the leading edge of an IC-21 macrophage showing actin in podosomes (red) and integrins (green). Colocalization of the two signals (bottom panel) is shown in yellow and occurs only within podosomes at the leading edge. DAPI is shown in blue. Grid squares = 2 microns.

Because software companies generally respond to researchers' needs, most current imaging limitations occur downstream of acquisition, explains James G. Evans, research scientist at the Computational and Systems Biology Initiative (CSBi) at the Massachusetts Institute of Technology. For example, image data management and analysis are computationally intensive and often require facilities to design their own solutions.

Evans and colleagues are developing an imaging pipeline to collect data directly from microscopes and shuttle it at high speed into a database for storage and management, from which the data can then be sent for image restoration by deconvolution. To this end, Evans says he has had a lot of success with SVI's Huygens software. "It's very well parallelized, working across many processors, and is almost platform-independent," Evans says.

Another problem is the lack of a standardized data format among microscope vendors. "Often they have two, three, or four internal proprietary formats, which makes it extremely hard to exchange data among researchers using different equipment," says Marius Messerli, CEO of Zurich-based Bitplane. Researchers doing specialized work often have to cobble together hardware from different vendors, which compounds the problem. "When you're trying to do 3-D reconstructions, you could spend half your time having to plug in parameters and work them out, because the softwares aren't really talking to each other properly," explains North.

Bitplane's Imaris software solves this dilemma by incorporating file readers for the major microscope vendors, but this goes only so far, says Evans. Microscope file formats generally indicate only how the microscope was configured, but other parameters such as cell type, construct design, and drug concentrations can still be difficult to share between laboratories.

That's where the Open Microscopy Environment (OME) project comes in. Developed by Peter Sorger of MIT and Jason Swedlow of the University of Dundee, Scotland, OME aims to create a standardized language for storing and sharing image data. The consortium has created an XML schema listing approximately 150 different image parameters, Evans notes. "Without ever talking to the guy who collected the data, you can find out a lot of information as to how that data was acquired," he says.
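As a purely illustrative sketch of the idea (the element and attribute names below are invented for this example and are not the actual OME schema), acquisition metadata can travel with the pixels as structured XML that any laboratory's software can parse.

import xml.etree.ElementTree as ET

# Hypothetical metadata record; field names are illustrative, not the real OME-XML schema.
image = ET.Element("Image", Name="tadpole_stack_07")
ET.SubElement(image, "Objective", Magnification="40x", NA="1.2")
ET.SubElement(image, "Pixels", SizeX="512", SizeY="512", SizeZ="50",
              PhysicalSizeXY="0.2um", PhysicalSizeZ="1.0um")
ET.SubElement(image, "Channel", Fluorophore="GFP", Excitation="488nm")

# Serialize the record so it can accompany the image data between laboratories.
print(ET.tostring(image, encoding="unicode"))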

ADVANCES IN MULTIPHOTON MICROSCOPY

Imaging advances are not limited to software developers; scientists in academic laboratories have been improving imaging tools and hardware, too. These technologies enable investigators to probe deep within tissues and examine individual cellular substructures.

Scientists at Cornell University and the University of Rennes, France, used a variation of two-photon microscopy to capture nerve cell signaling events in sea slug neurons at submillisecond resolution.2 The technique, called second-harmonic generation microscopy, is related to two-photon microscopy in that it involves the simultaneous interaction of two photons with a molecule. However, rather than being absorbed as in two-photon microscopy, the photons scatter nonlinearly off arrays of non-inversion-symmetric fluorescent molecules and are converted to a single photon at half the wavelength (and thus twice the energy) of each of the original photons.
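In terms of photon energy the conversion is simply additive: two excitation photons of wavelength $\lambda$, each carrying energy $hc/\lambda$, emerge as one photon carrying their combined energy,

\[
E_{\mathrm{SHG}} = 2\,\frac{hc}{\lambda} = \frac{hc}{\lambda/2},
\qquad
\lambda_{\mathrm{SHG}} = \frac{\lambda}{2}.
\]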

The second-harmonic signal arises only from asymmetric dye molecules that are aligned in the same direction (as in a cell membrane). This limits the signal to the area where the dye molecules sense the membrane potential, increasing signal-to-noise ratios, explains lead author Daniel Dombeck, a graduate student in the lab of Cornell's Watt Webb, who codeveloped two-photon microscopy.

The dye molecules, developed by Mireille Blanchard-Desce of the University of Rennes, also could allow high-resolution imaging of signaling events in thick brain slices, which cannot be probed with existing methods. "Second-harmonic generation right now is the only technique that is capable of imaging these fast events with as high resolution deep in tissue," says Dombeck.

Webb and Dombeck caution that second-harmonic generation microscopy needs further refinement. At present, for example, the imaging speed is one frame per second, but action potentials occur on a millisecond time scale. Webb's group is developing faster imaging systems.

THE POWER OF DECONVOLUTION:

Courtesy of James Evans

A macrophage, fluorescently stained for tubulin (yellow/green), actin (red), and DNA (DAPI, blue), was imaged using a widefield microscope, deconvolved with Huygens Professional, and visualized with the FluVR spectral fluorescence volume renderer. At left, the original data; at right, the restored image.

Faster multiphoton imaging can also aid in multidimensional whole-organ imaging, explains MIT's Peter So, whose laboratory develops instrumentation for high-speed automated imaging of deep tissue. The typical multiphoton microscope setup can capture images at one frame per second, So explains. At this speed, imaging a cubic centimeter of tissue at one-micron resolution would take 25 million seconds, or about 9.5 months. "Instead of doing that, we developed a very high-speed, automated platform that allows us to do this today on the order of two days," So says. A second-generation automated microtome system, now in development, will be able to accomplish the same feat in four hours. So plans to use this technology to map every cell in an organ and link it to genomics and proteomics data.
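So's figure follows from a simple voxel count; the per-frame field of view used below is an assumed value chosen to reproduce his estimate, not one he quotes. A cubic centimeter sampled at 1 µm contains $(10^{4})^{3} = 10^{12}$ voxels, so at one 200 × 200-pixel optical section per second,

\[
\frac{(10^{4})^{3}\ \text{voxels}}{200 \times 200\ \text{voxels/frame}}
= 2.5 \times 10^{7}\ \text{frames}
\approx 2.5 \times 10^{7}\ \text{s}
\approx 9.5\ \text{months}.
\]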

ENHANCING THE RESOLUTION

More than a century ago, Ernst Abbe observed that resolution, the ability to distinguish between closely spaced objects, is limited by the diffraction of light: no matter how perfect the optics, the image of a point of light is always blurred into a spot of finite size, which for modern light microscopes is about 200 nm in the x-y focal plane and 600 nm along the z-axis. Smaller objects cannot be resolved by light microscopy, or so scientists long thought.
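Abbe's limit is usually written in terms of the illumination wavelength $\lambda$ and the numerical aperture (NA) of the objective. With illustrative values for visible light and a high-NA objective ($\lambda \approx 500$ nm, $\mathrm{NA} \approx 1.3$),

\[
d_{xy} \approx \frac{\lambda}{2\,\mathrm{NA}} \approx \frac{500\ \text{nm}}{2 \times 1.3} \approx 190\ \text{nm},
\]

while the focal spot is stretched roughly threefold along the optical axis, in line with the ~200 nm and ~600 nm figures above.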

Several new techniques challenge this conventional wisdom. A number of microscopy laboratories now use a more advanced form of structured illumination (the technology behind the Zeiss ApoTome). This technique applies a fine pattern in several directions and employs more complex frequency-space processing to provide optical sectioning and double the resolution in both the x-y and z directions.

Other methods push the resolution envelope even further. In the late 1980s, Stefan Hell of the Max Planck Institute for Biophysical Chemistry, Göttingen, Germany, developed the 4Pi method, which reduces the focal spot size along the z-axis by a factor of seven. "In principle our methods have the potential to come up with a resolution of a few nanometers, virtually as good as electron microscopy," says Hell.

Hell observed that, in a standard microscope, spatial resolution is nonuniform: The cone-shaped wavefront produced by a single objective makes the focal spot longer along the z-axis than in the x-y plane. A spherical wavefront, on the other hand, would generate a uniform focal spot, increasing resolution in the z-direction. Hell devised the 4Pi microscope, in which light from two opposing objective lenses is combined to approximate such a spherical wavefront. The approach works with both confocal and multiphoton microscopes and can be used on live cells.

Mats Gustafsson at the University of California, San Francisco, developed a related widefield imaging technique, called I5M.3 Like 4Pi, I5M exploits interference between the light passing through the two objectives, but it does so over the whole image at once instead of at one scan point at a time. Both methods improve resolution along the z-axis by a factor of seven. Gustafsson has combined I5M with laterally structured illumination to improve resolution in the x-y direction as well.

Hell's latest effort, stimulated emission depletion (STED) microscopy, uses a different principle to shrink the focal spot: selectively inhibiting fluorescence. In this method, a sample is irradiated with an excitation pulse that is immediately followed by a donut-shaped depletion, or STED, beam.4 The STED beam quenches fluorescence at the periphery of the excitation spot, leaving a smaller fluorescent area and thus increasing resolution.

Hell has demonstrated that increasing the intensity of the STED pulse leads to an exponential decrease in fluorescence. "In principle, you can create a focal spot that is infinitely small," says Hell. By combining STED and 4Pi, Hell has achieved resolutions as low as 30 nm.
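The scaling behind that claim is often summarized as a modified form of Abbe's limit (a sketch, not a formula quoted in the article; here $I$ is the depletion intensity and $I_{\mathrm{sat}}$ the dye-dependent saturation intensity),

\[
d \approx \frac{\lambda}{2\,\mathrm{NA}\,\sqrt{1 + I/I_{\mathrm{sat}}}},
\]

so the effective fluorescent spot shrinks without a diffraction-imposed floor as the STED intensity grows.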

Though 3-D imaging has become routine, one thing is clear: It is far from stagnant. Advancements in live-cell and 4-D imaging, as well as improvements in image acquisition speed and resolution, are pushing the limits of what biologists can see under the microscope. Says Bitplane's Messerli, "Ten years ago biologists would dream about these methods, and now they are available."

Aileen Constans can be contacted at aconstans@the-scientist.com.

Selected Imaging Software Companies

Able Software http://www.ablesw.com
Alpha-Tec http://www.alphatecltd.com
Applied Precision http://www.appliedprecision.com
Atto Bioscience http://www.atto.com
AutoQuant http://www.aqi.com
Bio-Rad Laboratories http://www.bio-rad.com
Bitplane http://www.bitplane.com
Soft Imaging System http://www.soft-imaging.com
Syncroscopy http://www.syncroscopy.com
TGS http://www.tgs.com
TILL Photonics http://www.till-photonics.com
Universal Imaging Corp. http://www.image1.com
VayTek http://www.vaytek.com
Carl Zeiss http://www.zeiss.com/micro
ISee Imaging Systems http://www.iseeimaging.com
Kinetic Imaging http://www.kineticimaging.com
Media Cybernetics http://www.mediacy.com
Scanalytics http://www.scanalytics.com
Scientific Volume Imaging http://www.svi.nl
ScienceGL http://www.sciencegl.com
Scion http://www.scioncorp.com
Compix Inc., Imaging Systems http://www.cimaging.net
EMPIX http://www.empix.com
Iatia http://www.iatia.com.au
Improvision http://www.improvision.com
Indeed Visual Concepts http://www.indeed3d.com
Intelligent Imaging Innovations http://www.intelligent-imaging.com
