Supercomputer Installations Gearing Up For Next Decade

August 7, 1989

WASHINGTON - In the beginning there was an idea. And the idea was good: The National Science Foundation would bring high-performance computing to the scientific masses through a national network of supercomputing centers. Although some feared the network would cater to the computational elite, the five centers created in 1985 have now emerged at the crest of an extraordinary electronic revolution that promises to open wide the doors of numerical simulation to scientists in nearly every discipline.

At a computational frontier where once only physicists and engineers dared tread, medical researchers, biologists, and even sociologists now flock. What was long dismissed as mere "number crunching" has, in the last half of the 1980s, become universally accepted as an invaluable tool for probing that which nature obscures. Its applications range from the complexities of synthetic molecules to the glacial evolution of galaxies.

In May, the NSF propelled four of the centers into the next decade with a renewal of their five-year funding at a level 40% higher than in the past. Over that period the foundation will spend about $14 million annually on each center. Such an investment, in conjunction with funding from industry, local governments, and other sources, will buy the next generation of supercomputers, storage devices, and high-speed network equipment.

While the centers have learned much in their first five years, many important questions remain. Will the centers keep pace with demand? How can supercomputing be made more useful to the average scientist? And how will the centers remain at the forefront of supercomputer-based research?

Satisfying the scientific community's voracious appetite for massive processing power has emerged as a top concern. The scientific clamor for more computer time and more data threatens soon to overwhelm the electronic networks that allow researchers to access supercomputers from their own desks. NSFNET, the nationwide network that connects the supercomputer centers and 250 other research institutions, was upgraded last year to handle 1.5 million bits per second (1.5 megabits), but already it is nearing saturation.

In congressional testimony in June, NSF assistant director for computer and information science William Wulf said that "traffic on the [existing] networks is growing by 20% to 40% per month, doubling every six months.... It's unlikely that [current] capacity will be adequate in another year."
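To get a feel for what compounding at those rates does to a fixed-capacity link, the short Python sketch below works through the numbers. The 1.5-megabit backbone rate and the 20% to 40% monthly growth figures come from the article; the 50% starting utilization is an assumed value, chosen purely for illustration.

```python
from math import log

# Back-of-the-envelope estimate: months until a fixed-capacity link saturates
# under compounding monthly traffic growth.
CAPACITY_MBPS = 1.5        # NSFNET backbone rate cited in the article
START_UTILIZATION = 0.50   # assumption for illustration: the link is half full today

for monthly_growth in (0.20, 0.40):
    # Solve START_UTILIZATION * (1 + g)^m = 1.0 for m, the months to saturation.
    months_to_full = log(1.0 / START_UTILIZATION) / log(1.0 + monthly_growth)
    print(f"{monthly_growth:.0%} per month: the {CAPACITY_MBPS} Mbps link fills "
          f"in ~{months_to_full:.1f} months")
```

Under either growth rate the half-full backbone fills well inside a year, which is consistent with Wulf's warning.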

As serious as the situation is now, it has been even worse in the past. As recently as 1987, recalls Ralph Roskies, codirector of the University of Pittsburgh/Carnegie-Mellon University supercomputer center, "we had to maintain a bank of sixteen '800' dial-up numbers for our users, because the networks were unusable. Whatever bandwidth you put out there," Roskies says, referring to a common measure of network capacity, "you'll be swamped. Even an upgraded NSFNET will inevitably fill," he says, as more scientists tap the power of three-dimensional simulation, perhaps the most data-intensive of all supercomputing pursuits.

Providing a growing community of supercomputer users with enough raw computing power is another responsibility that figures prominently in the centers' upgrade plans. All four intend to increase their parallel-processing capability in the next five years.

The Cornell center, which, unlike most of the others, is not based on Cray machines, is the farthest along in parallel supercomputing. Its twin IBM 3090 computers are designed for six-way parallelism. While that is relatively mainstream compared to the massive parallelism of the 64,000-processor Connection Machine, for example, the IBMs are better suited for many conventional applications, says center director Malvin Kalos.

"My own interest in parallel computing remains unquenched. I believe the future lies in highly parallel machines," Kalos says. Toward that end, the Cornell center expects to buy one of the first machines from Supercomputer Systems Inc., the startup firm founded by ex-Cray wunderkind Steve Chen. The company plans to release a 64-processor supercomputer in 1992.

Not all the lessons learned over the past five years involve supercomputers per se. Center officials have discovered that scientists who were once content with the mainframe-world austerity of "dumb" terminals featuring simple text displays now demand the user-friendliness of typical graphics-based PCs and workstations.

"It's not enough to give [researchers] a FORTRAN prompt anymore," says Cornell's Siegel. "Before, the feeling was that 'real' supercomputer users should work with the tools that are available, no matter how crude. Now, that is intolerable to a community that often has a Macintosh on its desk."

A new Cornell project called "The Scientist's Workbench" aims to allow supercomputer users to interact graphically, using a mouse or other pointing device, with their programs as they run. One advantage of such real-time interaction is that parameters can be changed and conditions modified to correct a simulation that may be speeding toward an uninteresting result. That capability, which is taken for granted on PCs, is still rare in the "submit-and-wait" world of supercomputer use, despite a growing concentration on graphics at all the centers.
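The article does not describe how the Workbench is put together, but the general idea it points at, steering a computation while it runs instead of submitting a batch job and waiting for the output, can be sketched in a few lines of Python. Everything below (the parameter names, the threading arrangement, the toy "simulation") is invented for illustration and is not taken from the Cornell project.

```python
import threading
import time

# A generic illustration of computational steering: the compute loop checks a
# shared set of parameters between time steps, so a front end (in the
# Workbench's case, a mouse-driven graphical one) could adjust values or halt
# the run mid-flight rather than waiting for a batch job to finish.
params = {"heating_rate": 1.0, "stop": False}   # hypothetical knobs, not from the article
lock = threading.Lock()

def simulate(steps=100):
    temperature = 0.0
    for step in range(steps):
        with lock:                      # pick up the latest user-supplied values
            if params["stop"]:
                break
            rate = params["heating_rate"]
        temperature += rate             # stand-in for one real time step of a model
        time.sleep(0.01)
    print(f"stopped at step {step} with temperature {temperature:.1f}")

worker = threading.Thread(target=simulate)
worker.start()

time.sleep(0.3)                         # the "user" watches the early output...
with lock:
    params["heating_rate"] = 0.1        # ...and dials a parameter down without resubmitting
worker.join()
```

In practice the graphical front end would sit on the researcher's workstation and talk to the supercomputer over the network, which is one reason the appetite for graphics and the appetite for bandwidth keep growing together.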

Indeed, a snowballing interest in interactive and 3-D graphics has even spawned a new field: "visualization," the art of presenting data in such a way that the researcher can see it as a picture or a movie rather than simply as numbers (The Scientist, Feb. 6, 1989, page 1). The National Center for Supercomputing Applications (NCSA) at the University of Illinois dominates the field, although all the centers have visualization programs.

Most scientists agree that visualization will play a large part in future supercomputing. But some are concerned that progress in the field has been stifled by a dearth of innovative thinking. "I think most of the visualization solutions so far have been relatively pedestrian," says David Caughey, the former acting director of the Cornell center. What is still needed, he says, is for visualization tools to become so easy and intuitive to use that scientists who are not already steeped in supercomputer culture can try their hand, in the hope that they will come up with new approaches.

The demand for more and better graphics, however, will only increase the need for faster networks, officials warn. "NSFNET is far from unusable as is," says Cornell's Kalos, "but it could grow to become unusable as the demand for graphics increases." A solution may lie in a congressional proposal introduced this spring by Sen. Albert Gore (D-Tenn.) that calls for a $2 billion NSF network running at 3 billion bits per second (3 gigabits) by 1996.

"The hardware is racing ahead, and the software is moving quickly to take advantage of these new machines," Gore told the House Science, Research, and Technology subcommittee in June. "Here in Congress, we have to make sure that the policy does not lag too far behind."

Another issue facing the centers as they enter their second half-decade is the recent split of Cray into two separate and rival companies. Cray Computer Corp., headed by founder Seymour Cray, will continue work on the innovative Cray-3 design, which uses unique gallium-arsenide semiconductors, while Cray Research Inc. will manufacture the C-90 supercomputer, a more conventional design. Although the gallium-arsenide chips promise to eventually work much faster than existing silicon-based semiconductors, problems with dissipating heat have delayed the project.

"The Cray split makes us rethink things," says NCSA deputy director James Bottum. "We were leaning toward a Cray 3, but now we want to look more closely at its delivery schedule," he says.

Future debate is expected not only on hardware choices such as those offered by the Crays and a host of competing parallel designs, but also on software standards. One of the painful lessons learned since 1985, at least for one center, is that individuality is not always encouraged, especially when it comes to system software.

The University of California, San Diego center, which is operated by General Atomics, a contractor with strong ties to the Department of Energy, chose the DOE-standard CTSS operating system for its Cray X-MP in 1985. Unfortunately, most of the other centers chose UNICOS, a Cray operating system based on Unix, which has become an industry standard. After NSF criticism, the San Diego center has agreed to switch operating systems, a time-consuming process that requires rewriting much of the center's software. "The problem is converting quickly to keep up with the users," says deputy director Wayne Pfeiffer.

NSF sensitivity toward standards has been heightened by the sobering history of the John von Neumann facility near Princeton, N.J. (The Scientist, May 15, 1989, page 1). The center is the only one to use computers made by ETA Systems, a troubled firm that lost more than $100 million before it closed its doors in April. The von Neumann center's productivity has been crippled by poor software support from ETA, and it was the only one of the original five not to be renewed by NSF in May. The agency has postponed until later this month its decision on renewing the center's funding, while center officials draw up plans to replace the ETAs with Cray supercomputers.
