Posted on 04/20/2002 7:19:58 PM PDT by UnBlinkingEye
Japanese supercomputer takes world's fastest title from US
A new Japanese supercomputer has taken the title of world's fastest away from the US.
The NEC Earth Simulator processes data five times faster than its closest competitor.
It works at a speed of 35,600 gigaflops compared to its closest rival, IBM's ASCI White, which runs at a speed of 7,226 gigaflops.
A gigaflop equals a billion mathematical operations per second.
The NEC Earth Simulator is as large as four tennis courts and creates a "virtual planet Earth" to predict climate patterns.
Jack Dongarra, a University of Tennessee computer science professor, leads the group of researchers that tracks the world's 500 speediest computers.
"This machine is more powerful than the 20 fastest computers in the US," Mr Dongarra said. "It's more powerful than all of the Department of Energy and Department of Defence computers together."
Today's most popular supercomputers use a massively parallel processing system, in which thousands of mass-produced microprocessors are linked to solve complex problems.
NEC calls the Earth Simulator a "vector parallel" computer, which combines parallel processing with vector processing.
The TOP500 list is compiled by researchers at the University of Mannheim in Germany and the University of Tennessee. The list ranks computers by their performance on Dongarra's Linpack Benchmark, a standardised measure of a computer's speed at solving a "dense system of equations".
Story filed: 00:33 Sunday 21st April 2002
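For the curious, the Linpack measure mentioned in the article boils down to: time the solution of a dense n-by-n linear system and divide the operation count (roughly 2n³/3 floating-point operations for the LU factorization) by the elapsed time. A minimal sketch in Python with NumPy — not the official HPL benchmark code, just an illustration of the idea:

```python
import time
import numpy as np

def linpack_style_gflops(n=1000, seed=0):
    """Rough Linpack-style measurement: solve a dense n x n system
    A x = b and report gigaflops using the standard ~2/3 n^3 flop count."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    start = time.perf_counter()
    x = np.linalg.solve(A, b)       # LU factorization + triangular solves
    elapsed = time.perf_counter() - start
    flops = (2.0 / 3.0) * n ** 3    # dominant term of the LU operation count
    return flops / elapsed / 1e9    # gigaflops = billions of ops per second

print(f"~{linpack_style_gflops():.2f} Gflop/s on this machine")
```

A desktop of the era would report a fraction of a gigaflop on this; the Earth Simulator's 35,600 gigaflops is the same arithmetic scaled up by orders of magnitude.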
If Lexus, Infiniti and Acura are indicators, yes it can.
Maybe. But then again, maybe not. I wouldn't be surprised if the US had a number of super-advanced goodies under wraps.
I ask because we have the Cheetah at the Oak Ridge National Lab and it's a 4 teraflop system. The system has 1 TByte of memory and 24 Tbyte of HD.
Its program can predict weather patterns over the next 24 hours with 90% accuracy.
On the downside, the program takes 72 hours to run.
Yes, I'm kidding.
However, around six months ago, Cray negotiated a deal under which it would be the sole distributor of NEC supercomputers in the USA and receive a lump-sum payment from NEC; Cray then petitioned Congress to once again allow NEC to market and sell its supercomputers domestically.
The reason Cray did this is that in the field of supercomputers, there is an awful lot of trash-talk as far as specs go. IBM claims a theoretical peak performance of 7,226 gigaflops. Yes, sure, and I have the theoretical capability to lift a Greyhound bus and throw it through a building. If you take anybody's PC and benchmark it on some application, do you triple the application's performance if it runs simultaneously on 3 PCs? No, of course not. If you wire all 3 PCs together and they all work together, you may pick up some performance, but nowhere near 300%. So, bear with me ... when IBM takes 1,000 Pentium boxes, wires them together, and then announces to the world that the 'theoretical performance' of the machine is 1,000x the performance of a single PC ... that's a lie.
To get anywhere near that kind of scaling, you need a special chip architecture along with a highly customized data distribution methodology (sorry, Gigabit Ethernet or 100BaseT isn't gonna even come close).
I work in this industry, and the claims are a bunch of poo. What should be of concern is actual SUSTAINED performance, not theoretical peak...
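The point about 3 PCs not giving 3x is essentially Amdahl's law: any fixed fraction of the work that can't be parallelized caps the speedup, before you even account for interconnect overhead. A back-of-the-envelope sketch (the 5% serial fraction is a made-up illustrative assumption, not a measurement of any real machine):

```python
def amdahls_speedup(n_procs, serial_fraction):
    """Amdahl's law: overall speedup on n_procs processors when a fixed
    fraction of the workload must run serially."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Hypothetical numbers: even with only 5% serial work, 1,000 boxes
# deliver nowhere near 1,000x the performance of one box.
for n in (3, 1000):
    print(n, "procs ->", round(amdahls_speedup(n, 0.05), 1), "x speedup")
```

With a 5% serial fraction, 3 machines give under 3x and 1,000 machines give under 20x — which is why sustained Linpack numbers always come in well below the advertised theoretical peak.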
Teraflops is the winner; of course, we are talking about computations per second.
I ask because we have the Cheetah at the Oak Ridge National Lab and it's a 4 teraflop system. The system has 1 TByte of memory and 24 Tbyte of HD.
So this system is capable of processing one teraflop (one second) of information and storing twenty-four seconds?
I hope they use good input data here, otherwise I smell another global warming scam.
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.