SGI Advances High-Performance Computing, Collaborative Research

Nov 25, 2002
Jeffrey Perkel
Image: Courtesy of the SCI Institute, NLM, and Theoretical Biophysics Group of the Beckman Institute at UIUC
THE MIND'S EYE: A researcher maps the human brain using a large-scale visualization theater.

Imagine standing in a room, with a three-dimensional HIV-1 protease floating before your eyes. As big as a boulder, the enzyme's craggy surface seems so close you can almost touch it. But put out your hands, and you'll touch naught but air.

Welcome to the Delaware Biotechnology Institute's (DBI) Visualization Studio. In a darkened room on the campus of the University of Delaware, Newark, research assistant Praveen Thiagarajan is showing off the DBI's newest core facility. The Studio's most obvious feature is a 100-square-foot display with a resolution of nearly 2,500 x 1,024 pixels. But behind the scenes--both literally and figuratively--lies the system's heart: a Silicon Graphics (SGI) Reality Center®, powered by a six-processor SGI Onyx 3200 supercomputer with two graphics pipes driving a pair of rear-mounted projectors.

When running standard software, the screen behaves like any computer monitor--only much, much bigger. But when Thiagarajan executes software written specifically for the Visualization Studio, the image on the screen becomes blurry: the computer is displaying stereoscopic images, alternating between two slightly different pictures 60 times a second. The two pictures differ in the same small ways that the views from the left and right eyes differ when looking at the same object. To bring the display back into focus, just put on a special pair of glasses, which blocks the left and right lenses alternately, in sync with the display, to produce a three-dimensional image. Anyone who has seen a 3-D IMAX movie has experienced the same effect.
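The underlying idea fits in a few lines of code. The sketch below is not the Studio's actual software; the eye separation, head position, and frame count are illustrative assumptions, and a real system would render the scene and synchronize the glasses rather than print camera positions.

```python
# Hedged sketch of active-stereo frame alternation (not SGI's software).
# Assumptions: a simple parallel-camera model, ~6.5 cm eye separation,
# and a fixed head position; a real display runs this loop 60 times a second.
import itertools

def eye_positions(head_pos, eye_separation=0.065):
    """Return (left, right) camera positions offset along the x axis."""
    x, y, z = head_pos
    half = eye_separation / 2.0
    return (x - half, y, z), (x + half, y, z)

def render_stereo_frames(head_pos, n_frames=6):
    """Alternate left- and right-eye views, as the shuttered glasses expect."""
    left, right = eye_positions(head_pos)
    for i, eye in zip(range(n_frames), itertools.cycle(["left", "right"])):
        cam = left if eye == "left" else right
        # A real renderer would draw the scene from `cam` here and signal the
        # glasses so that only the matching eye sees this frame.
        print(f"frame {i}: {eye}-eye view from camera at {cam}")

render_stereo_frames(head_pos=(0.0, 1.7, 2.0))
```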

But DBI's Visualization Studio is not passive: The room is equipped with a sort of internal global positioning system. As one person--the simulation's "driver"--moves about the room, sensors track his location and the computer adjusts the picture accordingly. As a result, the system is "immersive," and you can walk into, under, and around the molecule in front of your eyes.
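Head tracking amounts to rebuilding the view transform whenever the tracker reports a new position. The sketch below is a generic look-at calculation with made-up tracker readings, not SGI's tracking interface; the molecule's position and the sample head positions are assumptions for illustration.

```python
# Hedged sketch of head-tracked viewing (not SGI's tracker interface).
# As the "driver" moves, the view matrix is recomputed so the model appears
# fixed in the room. Tracker readings and the model position are made up.
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a 4x4 view matrix from an eye position and a target point."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)                      # forward
    s = np.cross(f, up)
    s /= np.linalg.norm(s)                      # right
    u = np.cross(s, f)                          # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -(view[:3, :3] @ eye)         # move the world opposite the eye
    return view

molecule_center = (0.0, 1.5, -1.0)              # where the protein "floats"
for tracked_head in [(0.0, 1.7, 2.0), (0.5, 1.7, 1.5), (1.0, 1.6, 0.5)]:
    view = look_at(tracked_head, molecule_center)
    print(f"head at {tracked_head} -> view matrix recomputed")
```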

Such computational wizardry facilitates collaborative research, says Eng Lim Goh, SGI's chief technology officer and senior vice president. Goh was one of two keynote speakers at a symposium, jointly sponsored by SGI and the DBI, held in early October.

SYSTEM ARCHITECTURE

One problem Goh touched on was system scalability. Computer manufacturers typically build massive supercomputers by clustering smaller ones. The world's most powerful computer, Japan's Earth Simulator (ES),1 for instance, contains phenomenal computing power: 5,120 processors and 10 terabytes (10,000 gigabytes) of memory. These resources are fragmented across 640 nodes, each of which contains eight processors and 16 gigabytes of RAM. As a result, the ES's programmers must break computational problems into smaller pieces, each of which is tackled by one node.

Some problems lend themselves to this architecture very nicely. To render a digital movie, for instance, the main drawing program asks each node to draw one frame and send it back to the main node, where the images are ordered into the correct sequence to make the movie. Because each frame is independent--that is, because the drawing algorithm does not need to collect data from the previous frame, nor share its data with the next frame--the problem requires no node-to-node communication. This type of problem is known as "embarrassingly parallel."
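A toy version of that workflow, as a sketch: each worker renders frames independently and the main process reassembles them in order. The eight-worker pool, 240-frame count, and stand-in render function are illustrative assumptions, not anyone's production pipeline.

```python
# Hedged sketch of an "embarrassingly parallel" job: each frame is rendered
# independently, so workers never need to talk to one another.
from multiprocessing import Pool

def render_frame(frame_index):
    """Stand-in for drawing one frame; returns (index, payload) so the main
    process can put the frames back into the correct order."""
    return frame_index, f"pixels for frame {frame_index}"

if __name__ == "__main__":
    with Pool(processes=8) as pool:                    # eight "nodes"
        results = pool.map(render_frame, range(240))   # 240 independent frames
    movie = [payload for _, payload in sorted(results)]
    print(f"assembled {len(movie)} frames in order")
```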

Modeling protein folding, on the other hand, is anything but embarrassing: Each computational step in the simulation depends upon what came before, and affects what comes next. More importantly, what happens to atoms on the left affects atoms on the right. Try modeling the folding of a 50,000-atom protein on a 1,000-node computer, says Jim Taft, a contract physicist at NASA's Ames Research Center (ARC) in Moffett Field, Calif. Each node gets a paltry 50 atoms to work with, and so can finish its slice of the problem in a few microseconds. But then the computer must determine how each parcel of 50 atoms affects neighboring parcels. This phase of the calculation requires constant communication, and though each communiqué takes only about five microseconds, says Taft, the problem quickly becomes intractable, with communication costs exceeding computing costs. A problem like this is simply impossible to run efficiently on a large cluster; it demands very fast access to remote information.
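Some back-of-the-envelope arithmetic, using the figures Taft cites, shows where the time goes. In the sketch below the per-atom compute time and the number of neighbor exchanges per step are assumed values chosen only to illustrate the ratio; the five-microsecond message latency is the figure quoted above.

```python
# Hedged estimate of compute vs. communication time per simulation step.
# Only the atom count, node count, and 5-microsecond latency come from the
# article; the other constants are illustrative assumptions.
ATOMS = 50_000
NODES = 1_000
atoms_per_node = ATOMS // NODES                 # 50 atoms each

compute_us_per_atom = 0.05                      # assumed: ~2.5 us of work per node per step
message_latency_us = 5.0                        # latency quoted by Taft
neighbor_messages_per_step = 6                  # assumed: exchanges with a few neighbors

compute_us = atoms_per_node * compute_us_per_atom
comm_us = neighbor_messages_per_step * message_latency_us

print(f"compute per step:       {compute_us:.1f} us")
print(f"communication per step: {comm_us:.1f} us")
print(f"communication is ~{comm_us / compute_us:.0f}x the computation")
```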

SGI is therefore designing "monolithic" computers, called single-system image (SSI) machines. The company recently built one such system for the ARC, containing 1,024 processors and 256 gigabytes of RAM on one node, with a total theoretical system performance of 1.2 teraflops (trillion floating-point operations per second). This computer, the largest SSI computer ever constructed, cost a mere $15 million. The US Department of Energy's Accelerated Strategic Computing Initiative (ASCI) program, in contrast, recently spent about $200 million for a 30-teraflop cluster called Q.

Yet Taft, a major user of the Ames system, says SSI machines deliver better actual performance--as opposed to theoretical peak performance--than do clustered machines. Several major ASCI codes, he says, routinely execute at only 2-3% of peak performance on the clustered computers, whereas NASA often sees close to 20% of peak on its SSI machine. "Buying 10 times less iron is a nice way to go," he observes. "It also takes a lot less floor space." The ASCI Q occupies more than 21,000 square feet, compared with 600 square feet for NASA's SSI.
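Putting the quoted numbers together gives a rough sense of sustained throughput and cost. In the sketch below, the 2.5% efficiency used for the cluster is an assumed midpoint of the quoted 2-3% range, and the cost-per-sustained-teraflop figure is only an illustration, not a published benchmark.

```python
# Hedged comparison built from figures quoted in the article: peak speed,
# typical efficiency, and system cost. The 2.5% cluster efficiency is an
# assumed midpoint; results are illustrative, not measured.
systems = {
    "NASA SSI (SGI)": {"peak_tflops": 1.2,  "efficiency": 0.20,  "cost_musd": 15},
    "ASCI Q cluster": {"peak_tflops": 30.0, "efficiency": 0.025, "cost_musd": 200},
}

for name, s in systems.items():
    sustained = s["peak_tflops"] * s["efficiency"]
    cost_per_sustained = s["cost_musd"] / sustained
    print(f"{name}: ~{sustained:.2f} TF sustained, "
          f"roughly ${cost_per_sustained:.0f}M per sustained teraflop")
```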

Taft is now lobbying the space agency to pay for a system that can compete with Japan's ES. He sees no fundamental reason why such a computer could not be built in the United States; several vendors have already expressed interest. SGI's Origin SSI architecture, for instance, "scales extraordinarily well," he says, and a 4,096-processor SSI machine--which he estimates would cost about $60 million--could outperform the ES on several major US climate-modeling applications, the ES's bailiwick. "Within three years we could run circles around the ES," he predicts.

VISUALIZE THE FUTURE

Goh also discussed SGI's visualization and collaboration tools. The company's vision, so to speak, revolves around its Reality Centers (RCs), like the one at the heart of DBI's Visualization Studio. "Visualization is the common language that allows people from different backgrounds, training, and expertise to engage in an immediately productive working session," SGI notes on its Web site. "People can work together to solve problems, understand phenomena, and plan future actions."

The goal is to advance both research and collaboration. It is easier, says Goh, for a group to study a model--be it a protein structure or an airframe--on a large display than when everybody is crowded around a small screen. But often, researchers have outside collaborators who cannot attend meetings in the RC. For these scientists, the company has developed software so that collaborators, whether in another RC half a world away or on a laptop computer in a hotel room, can participate in--and even control--the simulation in real time.

But come to think of it, asks Goh, why should remote users be tethered to a computer at all? The company's researchers are currently working with Microvision (www.mvis.com) to drive a visualization device that uses a low-power laser to draw images directly on the user's retina. The user need only clip it to a pair of eyeglasses.

So maybe the next time you attend a conference at the DBI, you won't even have to leave your office.

--Jeffrey M. Perkel

1. A. Adams, "Supercomputing in the life sciences," The Scientist, 16[17]:41-3, Sept. 2, 2002.