
LANL dedicates 'Q' supercomputer: To maintain the nation's nuclear-weapons stockpile.
The Santa Fe New Mexican ^ | May 18, 2002 | JEFF TOLLEFSON/The New Mexican

Posted on 05/18/2002 11:33:46 AM PDT by Ernest_at_the_Beach

Los Alamos National Laboratory on Friday formally dedicated a new supercomputer called "Q," billed as the next step in the U.S. Department of Energy's efforts to maintain the nation's nuclear-weapons stockpile.

The $215 million computer, complete with its own $93 million building, is only partially installed, but lab officials say the machine should have a peak capacity of more than 30 trillion operations per second once it is fully operational later this year. Compaq - recently acquired by Hewlett-Packard - is building the machine. Officials said Q would be the second-fastest supercomputer in the world because Japan recently unveiled a machine capable of 40 trillion operations per second.

Ultimately, the supercomputer is destined to run weapons codes simulating nuclear explosions as part of a larger effort to understand and maintain existing nuclear weapons - and perhaps test potential modifications under consideration by the current administration - without actually exploding them. In observance of an international treaty that has yet to be officially ratified, current U.S. policy does not allow for full-scale nuclear tests.

Joined by U.S. Sen. Pete Domenici, R-N.M., and National Nuclear Security Administration head John Gordon on Friday, the lab also dedicated the 303,000-square-foot Nicholas C. Metropolis Center for Modeling and Simulation, named after one of the original Manhattan Project scientists who died in 1999.

At the core of the building is a 43,500-square-foot computer room, roughly three-quarters the size of a football field and big enough to hold two supercomputers at once in case of future replacements. Below is a massive air-conditioning system that pushes cold air through the floor to keep the room at 65 degrees. Exhaust stacks on the side of the building exhale hot air. The building also holds more than 300 offices along with gathering rooms and a small theater where researchers can watch their weapons codes play out in three-dimensional simulations.

"It's been built for supercomputer computing and to help people who use those tools, primarily, in this case, our weapons designers," said John Bretski, director of the Metropolis building project. Bretski said the facility will house theoreticians, experimentalists and computer scientists who try to translate physical concepts into codes for the supercomputer.

An equal amount of interpretation is necessary at the other end, which is where the 3-D simulations come in. Lab officials passed out goggles so journalists could watch sample 3-D models, including a colorful and detailed splash simulation of an asteroid striking the Pacific Ocean (one month's computer calculations went into less than a minute's visual).

"Understanding what comes out of these ... machines requires a visual approach, and we are looking at various aspects of doing that," said the lab's Bob Tomlinson.

DOE instituted its Advanced Simulation and Computing Program in 1995 with the goal of developing a computer capable of completing 100 trillion operations per second by 2004, which means more supercomputers are to come. Officials say they are now aiming for 2005. Lawrence Livermore Laboratory is building another supercomputer center, while Los Alamos officials say the Metropolis Center was designed to hold much more powerful supercomputers in the future.

John Morrison, leader of the Computing, Communications & Networking Division, said that kind of computational power is needed to run the weapons codes satisfactorily. Just recently, he said, it took eight months to run the first complete weapons code, largely on the current computer at Lawrence Livermore National Laboratory.

Although lab and DOE officials maintain this kind of research plays a crucial role in maintaining reliable nuclear weapons, the stockpile stewardship program, as it's known, has a host of critics both locally and nationally who say a back-to-basics approach might be much more effective - and less costly.

Greg Mello of the Los Alamos Study Group says the DOE is dumping money into a program that is full of unanswerable questions: How can anyone be sure that a mathematical calculation and its result truly represent real-world physics in any one of the thousands of bombs in the U.S. arsenal?

"If a big computer was needed, then how many of them do we need? I say one," Mello said Friday, referring to an apparent leap-frog effect in supercomputer acquisitions by national laboratories. "And there's a problem with the codes. The whole idea of doing this requires a lot of other elements to come together, and it's not clear that the other elements are coming together."

The program would be better off conducting strict examinations of existing weapons and replacing parts as any problems are encountered, Mello argued.

The DOE Office of Inspector General reported last year that DOE is failing to keep up with the standard annual certification process for weapons, a task that it called the "first line of defense" in stockpile stewardship.


TOPICS: Extended News; Foreign Affairs; Government; News/Current Events; Technical
KEYWORDS: atomicweapons; bush; computers; defense; miltech; nationallabs; nukes; techindex

1 posted on 05/18/2002 11:33:46 AM PDT by Ernest_at_the_Beach

To: tech_index; Mathlete; Apple Pan Dowdy; grundle; beckett; billorites; ErnBatavia...

To find all articles tagged or indexed using tech_index, click here: tech_index

2 posted on 05/18/2002 11:35:29 AM PDT by Ernest_at_the_Beach

To: Ernest_at_the_Beach; *miltech
Interesting
3 posted on 05/18/2002 11:51:17 AM PDT by Free the USA

To: Ernest_at_the_Beach
I thought Al Gore invented the supercomputer.
4 posted on 05/18/2002 12:17:09 PM PDT by BulletBrasDotNet

To: Ernest_at_the_Beach
...but lab officials say the machine should have a peak capacity of more than 30 trillion operations per second once it is fully operational later this year....

But can you watch porn? That is the question...

5 posted on 05/18/2002 12:23:32 PM PDT by varon

To: Ernest_at_the_Beach
Would-you-like-to-play-a-game?
6 posted on 05/18/2002 12:34:08 PM PDT by Conservative til I die

To: Ernest_at_the_Beach
And how secure from spies and traitors will this beast be?
7 posted on 05/18/2002 1:21:25 PM PDT by remaininlight

To: Conservative til I die
Another helping of pork, please.
8 posted on 05/18/2002 1:28:38 PM PDT by Lessismore

To: Ernest_at_the_Beach
Is this a cluster or a supercomputer? Or are the two terms interchangeable now?
9 posted on 05/18/2002 1:34:24 PM PDT by LarryLied

To: LarryLied
It is a gathering of many small processors electronically interconnected in various schemes, with memory arranged in exotic ways and high-speed interconnections!

The terms have evolved greatly since the days of the mainframe!

Guess I will go look and see what they are doing with this one!
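
As a rough illustration of that idea, here is a minimal message-passing sketch in Python, assuming the mpi4py library; it is only a toy, not anything resembling the weapons codes Q will actually run:

    # Toy message-passing example (assumes mpi4py and an MPI runtime are installed).
    # Each process plays the role of one "node": it computes a partial result,
    # then a collective operation combines the pieces over the interconnect.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()          # this process's index within the job
    size = comm.Get_size()          # total number of cooperating processes

    # Each rank sums its own slice of a large range...
    partial = sum(range(rank * 1_000_000, (rank + 1) * 1_000_000))

    # ...and the reduction gathers the partial sums at rank 0.
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"{size} ranks computed a combined sum of {total}")

Launched with something like mpiexec -n 4 python sum_sketch.py, the same pattern scales from a desktop cluster to thousands of processors; the hard part is keeping all of them usefully busy.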

10 posted on 05/18/2002 1:41:33 PM PDT by Ernest_at_the_Beach

To: LarryLied
ASCI Q will take up more than 21,000 square feet, roughly the size of five basketball courts, at the Los Alamos facility. Expected to become fully operational by 2002, ASCI Q holds nearly 12,000 processors and will be able to perform in one day the calculations that would take a personal computer 60 years.
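
A quick back-of-the-envelope check of those figures against the 30-trillion-operations-per-second peak quoted in the article (my arithmetic, not the lab's):

    # Rough consistency check of the "one day vs. 60 years" comparison.
    q_peak = 30e12                   # quoted peak: 30 trillion operations per second
    speedup = 60 * 365               # 60 years of PC work compressed into one day
    implied_pc = q_peak / speedup    # speed the comparison implies for a single PC
    print(f"Implied PC speed: {implied_pc / 1e9:.1f} GFLOPS")   # about 1.4 GFLOPS

Roughly 1.4 billion operations per second is a plausible figure for a desktop machine of that era, so the comparison is at least self-consistent.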

Lots of processors. I don't know if they are Alphas or Intels!

Alphas would make it a more serious machine!

I'll look for more!

11 posted on 05/18/2002 1:46:11 PM PDT by Ernest_at_the_Beach

To: LarryLied
The contract, worth more than $150 million by industry estimates, will fund a computer with thousands of Alpha processors that will be built in 2002 at Los Alamos National Laboratory (LANL), sources familiar with the plan said. The machine, part of DOE's Accelerated Strategic Computing Initiative (ASCI), is expected to be able to perform 30 trillion calculations per second, or 30 teraflops.
12 posted on 05/18/2002 1:49:57 PM PDT by Ernest_at_the_Beach

To: Ernest_at_the_Beach
Thanks. The last holder of the record, ASCI White, is a cluster composed of 8,192 microprocessors. I'm not a techie, but this stuff is fascinating. Kids can build clusters with processing power that was on the munitions list of items not allowed to be exported because of national security just a few years ago.
13 posted on 05/18/2002 1:50:21 PM PDT by LarryLied

To: LarryLied

Richard Kaufmann: The Q Supercomputer and Compaq

Article #60171, November 7, 2000
Reproduced, with permission, from HPCwire.

Dallas, Texas -- On August 22, 2000, the U.S. Department of Energy's (DOE) National Nuclear Security Administration (NNSA) selected Compaq to build the world's fastest and most powerful supercomputer, a 30+ TeraOPS system code-named 'Q' - the latest advancement in the Accelerated Strategic Computing Initiative (ASCI).

Richard Kaufmann, Technical Director for Compaq Computer Corporation's High Performance Technical Computing Group, recently took the time to speak with HPCwire about Q and the state of high performance computing at Compaq.

~ ~ ~ ~ ~

HPCwire: Tell us about some of the challenges of developing a system to meet the ASCI requirements for the 30+ TeraOPS Q supercomputer.

KAUFMANN: The Q system will be the largest computer in the world when installed. It will push on all system parameters: scheduling, I/O, system management, MTBF, etc. It will be based on our AlphaServer SC technology. SC systems are installed at CEA (France), LANL, LLNL, ORNL, Pittsburgh Supercomputer Center, and others.

Compaq's strategy (with guidance from Los Alamos) is to build Q out of approximately 375 of Compaq's largest servers, the 32-CPU AlphaServer GS320. The GS320 is shipping to customers now, and has proven to be a very stable platform. The Q will use some planned upgrades for the GS320, available late 2001. The upgrades include faster CPUs and an updated I/O subsystem. Hardware stability is key to a successful deployment of Q, so using a solid server is very important.
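
The headline numbers in that plan hang together arithmetically; a small sanity check (my own, not Compaq's):

    # 375 GS320 servers x 32 CPUs each gives the ~12,000 processors quoted elsewhere.
    servers = 375
    cpus_per_server = 32
    total_cpus = servers * cpus_per_server       # 12,000 Alpha CPUs
    peak = 30e12                                 # 30+ TeraOPS target
    per_cpu = peak / total_cpus                  # peak operations per second per CPU
    print(total_cpus, f"{per_cpu / 1e9:.1f} GFLOPS per CPU")   # 12000, 2.5 GFLOPS

A peak of 2.5 GFLOPS per processor would correspond to an Alpha core retiring two floating-point results per cycle at about 1.25 GHz - an assumption about the upgraded parts, not a figure from the interview.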

Q will use the next version of the interconnect fabric from our AlphaServer SC series. The main changes are a different, faster host interface and increased network link bandwidth. Q will have eight rails (each rail is a separate network plane), and each GS320 will have at least 4 GB/s of message passing bandwidth.

375 nodes is a lot of nodes! Unless, of course, you compare Q with the NSF TCS-1 system to be installed at the Pittsburgh Supercomputing Center in 2001. Their system will consist of more than 600 quad-processor servers, and will be built with two rails of the AlphaServer SC fabric.

Q has a strong I/O requirement. An application must be able to dump all 12 TB of memory to a global file store in ten minutes. This translates to more than 20 GB/s of sustained parallel file system write bandwidth! We're using a request forwarding mechanism to move I/O from the compute nodes to the file servers. This forwarding technology will first appear in the next few months in AlphaServer SC version 2.0, and we expect to spend significant time on this mechanism to ensure proper scaling.
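
That bandwidth figure follows directly from the stated requirement; worked out explicitly:

    # 12 TB of aggregate memory dumped to the global file store within ten minutes.
    memory_bytes = 12e12
    window_seconds = 10 * 60
    required_bw = memory_bytes / window_seconds
    print(f"{required_bw / 1e9:.0f} GB/s sustained write bandwidth")   # 20 GB/s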

We're fortunate to have an incredibly solid compiler. It is easy to forget that modern processors are totally dependent on very smart compilers (not to mention compiler writers!), and Compaq's Gem compiler technology has been one of our secret weapons. Some new technology will be added in Q's timeframe to help with "NUMA" memory topologies. There will be a sneak peek of this technology during Jonathan Harris' talk at Supercomputing.

There are nearly 12,000 Alpha processors in Q, and all of them will want to send MPI messages to each other. The virtual DMA technology of the AlphaServer SC fabric makes this practical, but we expect to spend a significant amount of time tackling application scaling. This is one of the key areas where the researchers at LANL will be working with us.
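
One way to see why application scaling, rather than raw hardware, is the hard part (an illustration, not Compaq's analysis): the number of potential communication pairs grows quadratically with the processor count.

    # Pairwise communication possibilities explode at this scale.
    ranks = 12_000
    pairs = ranks * (ranks - 1) // 2
    print(f"{pairs:,} distinct point-to-point pairs")   # about 72 million

That is why large codes lean on structured collectives, nearest-neighbor exchanges and careful process placement rather than arbitrary pairwise messaging.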

There are many other challenges in building Q! Just imagine this huge number of servers, each of which is six feet wide and five feet tall. There are more than 3,000 parallel fiber links from the servers to the switches, and an equal number of copper links tying together the cabinets for each switch. We're building a special Q outpost in our manufacturing plant just to handle the first stage of system integration. The logistics required for this effort are pretty impressive.

HPCwire: Tell us about Compaq's involvement with the Accelerated Strategic Computing Initiative (ASCI).

KAUFMANN: Compaq (then DEC) has worked with ASCI since the ASCI Blue procurement in 1995. Our first successful procurement was ASCI PathForward. Under the PathForward program, Jim Tomkins (Sandia), Karl-Heinz Winkler (LANL) and Mark Seager (LLNL) worked with us to accelerate our interconnect program. PathForward is directly responsible for our relationship with Quadrics Supercomputers World, the supplier of the interconnect fabric in AlphaServer SC, and we wouldn't have been able to bid on the 30T system without it.

Our discussions with the DoE labs over the past five years have profoundly influenced every aspect of our system design. Personnel from the DoE labs (as well as some other advanced customers, such as the French CEA) are involved in our design decisions right at the point of conception; quite often these researchers know about our future designs well before many of our own engineers!

HPCwire: What can you tell us about how the National Nuclear Security Administration (NNSA) will use Q?

KAUFMANN: Q will be used to push the state of the art of scientific modeling and simulation. The DoE will use the system to provide responsible stockpile stewardship, in an environment where nuclear testing is no longer acceptable. This need for reliable simulation is pushing not just computer companies, but also legions of algorithm and code designers - at the DoE, universities, and ISVs.

HPCwire: Does Compaq plan to continue to support governmental agencies like NNSA? If so, which agencies?

KAUFMANN: Compaq has long-standing relationships with the NSF, the intelligence community, the DoD, NASA, and others. Our HPTC organization works closely with these U.S. agencies, as well as other organizations outside the U.S.

If anything, we're looking for ways to strengthen our relationships with these agencies. We call these accounts "lighthouse accounts" because we expect their needs to be a few years ahead of the wider market. It is working to satisfy the needs of these accounts that produces our technology for future generations of AlphaServer SC.

HPCwire: Will Compaq be building more 30+ TeraOPS systems, or will they be focusing on creating yet more powerful systems?

KAUFMANN: We're deeply involved in design discussions for 100 TF and beyond. However, we do expect to sell lots of systems in the 0.002 TF (aka a single CPU!) to the 30 TF range over the next few years. Just go to http://www.compaq.com/hpc, and have your credit card ready!

HPCwire: Compaq recently reported record revenue for its third quarter this year. Do you think that the selection of Q by the NNSA contributed to this?

KAUFMANN: It's our belief that the Q announcement (as well as some other major successes, described later) has sent a clear message that Compaq is in an incredibly strong position in HPTC. This undoubtedly has helped us in recent competitive situations, but significant revenue for the Q system itself won't hit Compaq until 2001 and 2002. It's great to work for a company that's firing on all cylinders. Everything from the handheld iPAQ to the 32-CPU AlphaServer GS320 is doing quite well, and the HPTC group is proud to do its part to help Compaq's financial success.

HPCwire: Is there any other news regarding High performance computing at Compaq that you would like to talk about?

KAUFMANN: This has been a great year for us! In addition to Q, we've won the largest supercomputer program in Europe (CEA), the largest civilian supercomputer (the NSF Terascale system with the Pittsburgh Supercomputing Center) and, very recently, the largest supercomputer in Japan, with the Japanese Atomic Research Institute. Our AlphaServers played a pivotal role in mapping the human genome earlier this year and at this moment are allowing researchers at Celera Genomics and many publicly funded institutions to complete the annotation of the genome and publish their findings. Blue Sky Studios, part of Fox, selected us to deliver the most powerful computing facility in the entertainment industry last spring. And in May, we began shipping our new GS series servers; high-performance computing customers have been snapping them up at a record pace. So we're on a very strong roll.

Looking forward, Compaq's roadmap would be quite daunting to deliver, except for the excellent partners we've picked: Quadrics' Elan network is the fabric of the AlphaServer SC series. This fabric has given us a very strong performance and capacity boost. We're also lucky to be working with Etnus (TotalView), Pallas (Vampir SC), Platform Software (LSF), and Raytheon (integration help on Q).

By the way, if anyone out there would like to help, Compaq is busy hiring talented folks for its High Performance Technical Computing organization. Feel free to come and talk to us at Supercomputing!

About HPCwire

HPCwire is a weekly worldwide electronic journal on all aspects of the High Performance Computing industry. For further details or a free trial subscription, visit the HPCwire web site.

14 posted on 05/18/2002 1:57:25 PM PDT by Ernest_at_the_Beach

To: Lessismore
which is where the 3-D simulations come in. Lab officials passed out goggles so journalists could watch sample 3-D models, including a colorful and detailed splash simulation of an asteroid striking the Pacific Ocean...
Hey! They could make movies in their spare time! Blow Lucas & Company right out of the Star Wars thriller genre! Pay for the computer(s?) with the proceeds. The tax-paying movie public could get some more thrills for their bills. You're happy. I'm happy. Everbuddy happy 'cept Lucas. Pass the popcorn!
15 posted on 05/18/2002 11:15:23 PM PDT by BlueDragon

To: BlueDragon; Ernest_at_the_Beach
Nixed for Linux

By Meridith Levinson

THE MASTER OF ILLUSION in the entertainment industry, Industrial Light & Magic (ILM)—George Lucas's visual effects and 3-D animation studio—is undergoing its own metamorphosis and sloughing off longtime partner SGI in the process.

According to Andy Hendrickson, director of research and development, San Rafael, Calif.-based ILM is in the process of replacing its 600 Unix-based SGI O2 workstations - which it used to render such characters as the gauze-clad antagonist in The Mummy, the fearsome velociraptors in the Jurassic Park flicks and Jar-Jar Binks in Star Wars Episode I - with Pentium 4 Linux machines. It is also replacing SGI's Unix-based Origin 2000 server with a combination of a Pentium 4 computer and Alpha processors that will run Linux from Red Hat.

Hendrickson thinks Linux is the best operating system for the visual effects industry. "It builds distributed computing well. It has rock-solid stability, a very low administration cost, and many years of robustness and testing behind it," he says. ILM counts on its new system getting movies in the can quicker with lower production costs.

ILM's transformation doesn't bode so well for SGI of Mountain View, Calif., whose entire business, especially its workstation business, is ailing. In April, the company reported a $141 million loss and cut 15 percent of its workforce. Salomon Smith Barney downgraded SGI's stock from neutral to underperforming in July.

It's a tragic downfall for a company that once dominated the visual effects market, and a classic story of a sluggish industry giant blindsided by newcomers. "It's been a long and fruitful relationship," says Hendrickson of ILM's work with SGI. "[But] right now we need more power. We need more simulation. We need to do more work. They don't have a Pentium in their offering, which is unfortunate for them."

The irony here is that SGI has helped the Linux cause in the past by serving up some of its own proprietary Unix code to the open-source community. Now Linux seems to be biting the hand that fed it.

16 posted on 05/19/2002 6:14:39 AM PDT by Lessismore
