Erica P. Johnson

The young field of proteomics has quickly risen to match physics and meteorology in its huge appetite for computational capacity. According to Sylvie Langevin, development group manager of Montreal-based proteomics firm Caprion Pharmaceuticals, the computer processing and data handling requirements of proteomics exceed those of genomics by a factor approaching one million. "In genomics we have files of several kilobytes in size, but in proteomics we're closer to 1 [gigabyte]," Langevin says.

Fortunately, proteomics problems, such as three-dimensional visualization of protein structures and searching for matching sequences, can readily be broken into small components that are processed separately on different computers and then reassembled at the end. "In the trade we call them 'embarrassingly parallel,'" says Brian Carter, head of life science solutions for IBM in the United Kingdom.
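The split-process-reassemble pattern Carter describes can be sketched in a few lines. This is a minimal illustration, not anything from Caprion or IBM: Python's multiprocessing pool stands in for the grid nodes, and the motif and protein sequences are invented for the example. Because each sequence is scored independently, with no shared state, the list can be split across workers and the partial results simply summed.

```python
from multiprocessing import Pool

# Hypothetical example: count occurrences of a short peptide motif
# across a set of protein sequences. Each sequence is scored on its
# own, so the job is "embarrassingly parallel": split the list,
# score the pieces on separate workers, then reassemble the totals.

MOTIF = "KR"  # invented example motif

def count_motif(sequence):
    """Count overlapping occurrences of MOTIF in one sequence."""
    return sum(1 for i in range(len(sequence) - len(MOTIF) + 1)
               if sequence[i:i + len(MOTIF)] == MOTIF)

def parallel_count(sequences, workers=4):
    """Score each sequence on a worker process, then sum the pieces."""
    with Pool(workers) as pool:
        partial_counts = pool.map(count_motif, sequences)
    return sum(partial_counts)

if __name__ == "__main__":
    proteins = ["MKRLKRA", "AAKR", "MMMM"]
    print(parallel_count(proteins))  # matches a plain serial loop
```

On a real grid the workers would be machines rather than local processes, but the shape of the computation, independent pieces combined at the end, is the same.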

GRID LINES This makes proteomics ripe for grid computing, the fast-emerging computational model for sharing resources...
