In what has become a closely watched event in the world of high performance computing (HPC), the 24th edition of the Top500 list yesterday crowned IBM’s BlueGene system the world’s fastest supercomputer.
BlueGene set a record Linpack benchmark performance of 70.72 teraflops, or trillions of calculations per second, against a theoretical peak of 91.75 teraflops.
IBM formally introduced the eServer BlueGene to the market yesterday, marking a boon for Linux, the operating system on which the world’s fastest supercomputer runs.
BlueGene is closely followed by the SGI-built Columbia system, which also runs Linux and clocked in at 51.87 teraflops.
U.S. Takes Leadership Role
Both systems represent new technologies that enabled U.S. manufacturers and research institutions to regain the top spots from NEC’s Earth Simulator supercomputer, which now sits securely in third place.
Jack Dongarra, a computer science professor at the University of Tennessee and one of the compilers of the Top500 list, told LinuxInsider that this represents the U.S. taking back its leadership role in HPC.
“The Japanese Earth Simulator was at the top of the list for the past two and a half years,” Dongarra said. “There are now two machines that are faster than the Earth Simulator, and they are both in the U.S.
“One of the machines is actually twice as fast. This is a leapfrog in the capabilities of high performance computing.”
Fast, Faster, Fastest
These supercomputers are getting faster each year. The entry level for the Top 10 moved from 1.922 teraflops to 2.026 teraflops.
The number of systems exceeding the 1 teraflop mark jumped from 242 to 399, and the list’s organizers expect that the next edition, due in six months, will include only systems exceeding 1 teraflop.
This could be called the start of a new HPC era for business and scientific uses. But what does all this speed really mean? What is the promise of this new high-speed technology? What’s the hope beyond the hype?
Need for Speed
While the Department of Energy will use BlueGene to understand and better protect U.S. weapons stockpiles, and NASA will use the Columbia system to understand and prevent major space shuttle catastrophes, there are plenty of other uses emerging.
Dongarra said these supercomputers could help to solve some of the most challenging problems that face science today and directly impact the economy.
“These machines can be used for weather predictions,” he said. “If we had better methods of predicting hurricanes, then we could save a tremendous amount of money.”
Dongarra said it costs an estimated US$1 million to evacuate one mile of coastline. Without a precise prediction of where a hurricane will make landfall, more people are evacuated than necessary, translating into millions of dollars wasted.
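The scale of that waste is easy to see with a back-of-the-envelope calculation. The $1 million per mile figure is from the article; the mileage numbers below are invented purely for illustration:

```python
# Rough evacuation-cost estimate, using Dongarra's $1M-per-mile figure.
COST_PER_MILE = 1_000_000  # US$, from the article

# Hypothetical scenario: a coarse forecast forces a 300-mile evacuation,
# while a precise landfall prediction would require only 120 miles.
coarse_miles = 300
precise_miles = 120

wasted = (coarse_miles - precise_miles) * COST_PER_MILE
print(f"Excess evacuation cost: ${wasted:,}")  # prints $180,000,000
```

Even a modest improvement in forecast precision, on these assumed numbers, is worth well over a hundred million dollars per storm.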
“There are also very challenging medical problems these high performance computers can be used to solve,” Dongarra said. “By doing certain types of scans, we are able to see inside the body to locate tumors and can guide doctors to perform surgery on very precise locations.”
The future possibilities seem virtually endless, as performance rates of supercomputers are doubling every 18 months, according to the Top500 compilers.
The Top500 list is compiled by Hans Meuer of the University of Mannheim, Germany; Erich Strohmaier and Horst Simon of NERSC/Lawrence Berkeley National Laboratory; and Dongarra.