Free Republic

To: ThePythonicCow
In another window, I just logged off a system with 2048 CPUs and 4 terabytes of memory (RAM, not disk). It was running a single Linux boot, not a cluster.

I am having a bit of trouble believing that it is not a cluster. It seems to me that even with re-entrant libraries, a single Linux boot would be tripping all over itself on hard drive access speed alone, with library calls coming from 2048 processors, unless there is some kind of distributed library system. Can you provide more information?

If I recall correctly, the Virginia Tech Apple G5 Cluster of 2200 processors has a total of 17.6 TB of RAM. There are a couple of larger Apple clusters with more; I read that the US Army's COLSA MACH5 Apple G5 cluster has 25 TB of RAM.

24 posted on 03/23/2007 6:32:56 PM PDT by Swordmaker (Remember, the proper pronunciation of IE is "AAAAIIIIIEEEEEEE")


To: Swordmaker
It's a single system image - off-the-shelf SUSE SLES 10, with just a few patches. Without any patches, we can boot that kernel, right off the SUSE DVD, on a 1024-CPU system.

It's what's called a cache-coherent NUMA (non-uniform memory access) architecture.
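
For anyone curious what that looks like from user space, here is a rough sketch, assuming a Linux box with libnuma installed (compile with cc -lnuma; the node numbers and sizes are illustrative, not taken from the machine above):

    #include <stdio.h>
    #include <numa.h>            /* libnuma: numactl-devel / libnuma-dev */

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this kernel\n");
            return 1;
        }

        int max_node = numa_max_node();   /* highest memory node number */
        printf("nodes 0..%d visible in one address space\n", max_node);

        /* Allocate 1 MiB pinned to node 0; a CPU on any other node can
           still read and write it, because the interconnect keeps the
           caches coherent. */
        size_t len = 1 << 20;
        char *buf = numa_alloc_onnode(len, 0);
        if (!buf) {
            perror("numa_alloc_onnode");
            return 1;
        }
        buf[0] = 42;   /* an ordinary store, no message passing involved */

        /* numa_distance() reports relative memory latency (10 == local),
           a rough measure of how "far" one node's memory is from another. */
        printf("distance node 0 -> node %d: %d\n",
               max_node, numa_distance(0, max_node));

        numa_free(buf, len);
        return 0;
    }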

One benchmark I saw go by just now was scaling at over 90 percent efficiency.
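
To put that in perspective: parallel efficiency is usually figured as E = T1 / (N x TN), where T1 is the time on one CPU and TN the time on N CPUs. With made-up numbers, if a job takes 2048 seconds on one CPU and 1.1 seconds on 2048 CPUs, the speedup is 2048 / 1.1, about 1862x, and the efficiency is 1862 / 2048, about 91 percent. (Illustrative figures only, not from the benchmark mentioned above.)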

Every Unix command and system call sees just one kernel; there is just one kernel.

We've been doing some -serious- scaling of the mainstream Linux kernel over the last few years. Look for the configuration parameter CONFIG_NR_CPUS in the stock kernel source, in the various arch/*/configs/*_defconfig files. You will see values ranging from 2 to 1024 for the various hardware architectures.

That means that, for that hardware, you can run the kernel on a system with that many CPUs, if you can scrape together that much hardware and that much money for the kick-ass interconnects it takes to run stuff like this at speed.
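
A minimal sketch of what "just one kernel" means from a program's point of view: a single process can ask that one kernel for the machine-wide CPU and RAM totals with plain glibc calls (nothing here is vendor-specific; the numbers printed are whatever your own box has):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/sysinfo.h>

    int main(void)
    {
        long cpus_conf   = sysconf(_SC_NPROCESSORS_CONF);  /* CPUs configured */
        long cpus_online = sysconf(_SC_NPROCESSORS_ONLN);  /* CPUs online now */

        struct sysinfo si;
        if (sysinfo(&si) != 0) {
            perror("sysinfo");
            return 1;
        }

        /* totalram is counted in units of mem_unit bytes */
        unsigned long long ram = (unsigned long long)si.totalram * si.mem_unit;

        printf("CPUs: %ld configured, %ld online\n", cpus_conf, cpus_online);
        printf("RAM : %.1f GiB, all under a single kernel\n",
               ram / (1024.0 * 1024.0 * 1024.0));
        return 0;
    }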

Yes - I know what a cluster is, including some clusters that support shared memory and such transparently across the nodes.

This is not a cluster. Though we'd be happy to sell you a cluster as well. The main difference between a cluster and a big box like this is the interconnects. Running a cache-coherent single-system memory image across this much memory requires some seriously fat and fast wires, and some nasty routers. The memory controller chip, what would be the north bridge in an ordinary PC, is rather different too; not a commodity part.

For jobs that are sufficiently parallelizable, clusters are more economical. For jobs that are only coded to run in a single big memory, you need serious NUMA iron like this.
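
As a toy illustration of the "single big memory" style of coding: the threads below all sum slices of one shared array with ordinary loads and stores, which is exactly what a cluster cannot offer without partitioning the data and passing messages between nodes. A sketch only; the thread count and array size are made up (build with cc -pthread):

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define N (1 << 20)

    static double data[N];            /* one array in one address space */
    static double partial[NTHREADS];  /* per-thread partial sums */

    static void *sum_slice(void *arg)
    {
        long id = (long)arg;
        double s = 0.0;
        /* each thread sums its own contiguous slice of the shared array */
        for (long i = id * (N / NTHREADS); i < (id + 1) * (N / NTHREADS); i++)
            s += data[i];
        partial[id] = s;
        return NULL;
    }

    int main(void)
    {
        for (long i = 0; i < N; i++)
            data[i] = 1.0;

        pthread_t t[NTHREADS];
        for (long id = 0; id < NTHREADS; id++)
            pthread_create(&t[id], NULL, sum_slice, (void *)id);

        double total = 0.0;
        for (long id = 0; id < NTHREADS; id++) {
            pthread_join(t[id], NULL);
            total += partial[id];
        }
        printf("sum = %.0f (expected %d)\n", total, N);
        return 0;
    }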

26 posted on 03/23/2007 7:54:06 PM PDT by ThePythonicCow (The Greens steal in fear of pollution, The Reds in fear of greed; Fear arising from a lack of Faith.)

To: Swordmaker
Folks running this class of hardware stay away from the disks during the main computation phases like fighter pilots avoid dirt roads.

And when they do have to go to disk, to read the input data or write the results, they go to some major storage arrays.

27 posted on 03/23/2007 7:57:43 PM PDT by ThePythonicCow (The Greens steal in fear of pollution, The Reds in fear of greed; Fear arising from a lack of Faith.)
