Posted on 03/22/2007 12:59:16 PM PDT by Swordmaker
"WHEN I STARTED WRITING this column back in 1992, the world of personal technology was positively primitive compared with where we stand today. So armed with the benefit of 15 years of hindsight, and in this final installment of the Mossberg Report, I'd like to take a look back on the distance we've traveled in personal technology over the past decade and a half, as well as make a few predictions about where things might be headed," Walt Mossberg writes for SmartMoney (Feb. 1, don't know how we missed it. By the way, the article's date on the SmartMoney site is currently incorrect, Mr. Mossberg tells MacDailyNews. It should be 2007, not 2006).
MacDailyNews Note: This morning, we asked Mr. Mossberg why it was the final installment of the Mossberg Report and he told us that he has decided to drop this particular column due mainly to increasing demands on his time from his D Conference (http://d.wsj.com) and a new web site launching next month which is an extension of the D Conference, called All Things Digital. See http://www.allthingsd.com for a preview. Mr. Mossberg said, "I am still writing two WSJ columns weekly (Personal Technology and Mossberg's Mailbox), and editing (and occasionally authoring) a third weekly WSJ column, The Mossberg Solution (where our Apple TV review ran yesterday). I am still co-producing my annual tech conference, and now will be co-producing a new tech web site. I just won't be doing the magazine column."
"In 1992 the Internet wasn't available to the general public. There were no iPods or any other portable digital music players. Cellphones were big, bulky and analog, mainly used in cars in the U.S. The first consumer digital cameras had just arrived: crude models that cost $800, worked only in black and white, and held just 32 images," Mossberg writes.
Mossberg writes, "Microsoft was offering the clumsy Windows 3.1, which seemed to crash if you sneezed, and many people were still using the geeky and limited DOS operating system on their 'IBM-compatible' PCs. Apple's technology was way ahead of Windows, but the company would soon enter a period of management mediocrity and product paralysis."
"By around 2001, when the current major operating systems, Windows XP and Apple's Mac OS X, made their debuts, personal technology had vastly improved. Many of the rough edges had been sanded off. The Mac had long been 'plug and play,' and Windows was much closer to that goal. Both systems were fairly stable," Mossberg writes.
Mossberg writes, "There had been viruses for many years, of course, including some big attacks in the 1990s. But over the past five years, the security problem has morphed into a major hassle for people who own and use Windows computers... Today, warding off the myriad threats online takes more and more time, money and effort than ever before. You have to run multiple security programs, interpret all their warnings and alerts, tell them what to do when they detect suspicious activity, and consistently update them. It's a real hassle, one that seriously interferes with the productivity, and the pleasure, computers can and should provide."
"In fact, the burden of using a Windows computer is higher now than it was in 2001. By contrast, Apple's Macintosh is easier to use than ever, partly because it has so far remained free from viruses, spyware and adware except for a few minor cases. After stagnating in the mid-'90s, Apple's software and hardware are once again markedly superior to those of Windows PCs," Mossberg writes.
Are you keeping up with the FR switchers, Swordmaker? Every day there seem to be one or two more.
Take a deep breath, Jeff.
You're just having a minor FUD attack.
Everything will be alright.
But it's very nice hardware, for a fair price.
If what you want closely matches an available Mac hardware model, that may well be your best choice. But if you have other specific needs, you can usually get a better fit in the commodity PC hardware space.
If price is a key selection criterion, that "better fit" can include a lower price, in which case your hardware might not be as nice as Mac hardware, but it may meet your needs just as well, if not better, for less money.
For a couple of my relatives who just need simple documents, email and web, I build systems for a few hundred dollars, plus monitor and printer. I could do that with a Mac Mini, but it would cost two to three hundred more.
For my teenage son and his friends, I build gaming systems for perhaps a thousand to fifteen hundred dollars each. I could do that with a Mac Pro, booted into Windows (serious games are exclusively Windows), for perhaps an extra thousand dollars.
The same applies on the high end -- for my day job, I work on ten thousand to ten million dollar systems, built mostly from commodity PC hardware ... very select, very leading edge, industrial strength, commodity hardware. In another window, I just logged off a system with 2048 CPUs and 4 terabytes of memory (RAM, not disk). It was running a single Linux boot, not a cluster. Apple is not one of the vendors we deal with.
I am having a bit of trouble believing that it is not a cluster. It seems to me that even with re-entrant libraries, a single Linux boot would be tripping all over itself just servicing library calls from 2048 processors, given hard-disk access speeds, unless there is some kind of distributed library system. Can you provide more information?
If I recall correctly, the Virginia Tech Apple G5 Cluster of 2200 processors has a total of 17.6 Terabytes of RAM. There are a couple of larger Apple clusters that have more. I read the US Army's COLSA MACH5 Apple G5 cluster has 25 TB of RAM.
One would hope that the overpriced accessories sold by the computer's manufacturer would of course be plug and play...
It's what's called a cache coherent NUMA architecture.
One benchmark I saw go by just now was scaling at over 90 percent efficiency.
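For reference, scaling efficiency here just means speedup divided by CPU count. A back-of-the-envelope sketch in C, with made-up run times (the actual benchmark figures are not public):

    /* Hypothetical numbers only -- the real benchmark figures were not published. */
    #include <stdio.h>

    int main(void)
    {
        double t_serial   = 1000.0;   /* assumed run time on one CPU, in seconds   */
        double t_parallel = 0.54;     /* assumed run time on all 2048 CPUs         */
        int    ncpus      = 2048;

        double speedup    = t_serial / t_parallel;   /* roughly 1852x               */
        double efficiency = speedup / ncpus;         /* 1.0 = perfect linear scaling */

        printf("speedup %.0fx on %d CPUs -> %.0f%% efficiency\n",
               speedup, ncpus, efficiency * 100.0);
        return 0;
    }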
Every Unix command and system call sees just one kernel; there is just one kernel.
We've been doing some -serious- scaling of the mainstream Linux kernel the last few years. Look for the configuration parameter CONFIG_NR_CPUS in the stock Linux kernel source, in the various arch/*/configs/*_defconfig files. You will see values ranging from 2 to 1024, for various hardware architectures.
That means, for that hardware, you can run the kernel on a system with that many CPUs, if you can scrape together that much hardware and that much money for the kick ass interconnects that it takes to run stuff like this at speed.
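If you want to check those values yourself, here is a rough sketch that walks a kernel source tree and prints each arch's CONFIG_NR_CPUS setting (run it from the top of an unpacked source tree; a plain grep over arch/*/configs/*_defconfig does the same job):

    /* Rough sketch: print CONFIG_NR_CPUS from each per-arch defconfig file.
       Run from the top of an unpacked kernel source tree.                    */
    #include <glob.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        glob_t g;
        if (glob("arch/*/configs/*_defconfig", 0, NULL, &g) != 0) {
            fprintf(stderr, "no defconfig files found; run from the kernel source root\n");
            return 1;
        }
        for (size_t i = 0; i < g.gl_pathc; i++) {
            FILE *f = fopen(g.gl_pathv[i], "r");
            if (!f)
                continue;
            char line[256];
            while (fgets(line, sizeof line, f)) {
                if (strncmp(line, "CONFIG_NR_CPUS=", 15) == 0)
                    printf("%s: %s", g.gl_pathv[i], line);   /* line keeps its '\n' */
            }
            fclose(f);
        }
        globfree(&g);
        return 0;
    }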
Yes - I know what a cluster is, including some clusters that support shared memory and such transparently across the nodes.
This is not a cluster. Though we'd be happy to sell you a cluster as well. The main difference between a cluster and a big box like this is the interconnects. Running a cache coherent single system memory image across this much memory requires some seriously fat and fast wires, and some nasty routers. The memory controller chip, what would be the north bridge in an ordinary PC, is rather different too - not a commodity part.
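To make "cache coherent single system memory image" a little more concrete, here is a small sketch using the libnuma API (numa_available, numa_max_node, numa_alloc_onnode; link with -lnuma). It just places a buffer on one NUMA node; any CPU in the box can still touch it coherently, only slower from far away:

    /* Sketch: one kernel, one coherent address space, many NUMA nodes.
       Requires libnuma; link with -lnuma.                                   */
    #include <numa.h>
    #include <stdio.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this kernel\n");
            return 1;
        }
        printf("NUMA nodes visible to this one kernel: %d\n", numa_max_node() + 1);

        size_t len = 1 << 20;                    /* 1 MiB                         */
        char *buf = numa_alloc_onnode(len, 0);   /* place the pages on node 0     */
        if (buf == NULL) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }
        buf[0] = 1;   /* any CPU in the machine could do this, coherently */
        numa_free(buf, len);
        return 0;
    }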
For jobs that are sufficiently parallelizable, clusters are more economical. For jobs that are only coded to run in a single big memory, you need serious NUMA iron like this.
And for when they do have to go to disk, to get the input data or write the results, they are going to some major storage arrays.
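And as a rough illustration of the "single big memory" programming model those jobs use: with something like OpenMP, every thread works on the same array in the same address space, which is exactly what a single-system-image box gives you and a cluster does not (the array size here is arbitrary; compile with, e.g., gcc -fopenmp):

    /* Sketch: shared-memory parallelism -- all threads share one address space,
       with none of the explicit data distribution a cluster/MPI job would need.
       Compile with e.g. gcc -fopenmp.                                           */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        long n = 10 * 1000 * 1000;              /* arbitrary problem size         */
        double *a = malloc((size_t)n * sizeof *a);
        if (a == NULL)
            return 1;
        for (long i = 0; i < n; i++)
            a[i] = 1.0;

        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)   /* each thread sums a slice   */
        for (long i = 0; i < n; i++)
            sum += a[i];

        printf("%d threads shared the same %ld-element array, sum = %.0f\n",
               omp_get_max_threads(), n, sum);
        free(a);
        return 0;
    }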
Let's see... those "overpriced accessories" would be:
Every one of them was recognized and mounted by the OS when it was plugged in. The only one that required special software to be installed after it was recognized and mounted was my PDA... but I could still sync my Mac Address Book, iCal calendar, and notes without it.
Interesting. How are you getting up to 2048 CPUs? Is it virtual? The largest SSI systems I've heard about were 1024 CPUs, and that wasn't long ago. This probably isn't just straight "Linux" either, is it? Don't you need some proprietary SSI software from IBM or SGI, etc., to provide the boot layer? I've heard about OpenSSI, but it's nowhere close to this, is it? You're right about the programming advantages, though. I've heard programming for clusters can be difficult, which is why so many stick with their old Cray or IBM mainframe.
I beg to differ:
Wink, wink, nudge, nudge.
Cheers!
What was the benchmark? A numerical kernel or an application code?
What language was it written in?
Hint: drool drool. :-)
Do you have a Mac or a Windows computer?
Eh - sorry - I can't publish details of benchmarking ahead of the proper process ;).
I own both... and am a cross-platform consultant for businesses and individuals who run networks of either Windows PCs or Macs... but my preferred platform, by far, is the Mac.
I would get one, but I hear it's hard to get games for them, and they don't have all the games that a Windows PC would have.
You are aware that Macs can now run all Windows software, aren't you? Boot into Windows XP and run any game you want.
These machines are put to good use, by the good guys.
Thanks. I'm just concerned about it being open source, which IMO makes it easier for anyone, including the bad guys, to get it. As I understand it, those who accept that trade-off believe the "open" development environment allows for faster advancement of technology and capability, which is possibly true, but it then becomes a proliferation concern.
If the envelope or bleeding edge of technology is being shared by everyone, why couldn't the bad guys be just as likely to hold a superior advantage over you at any point in the future, since you're already giving them equal footing? Sorry to put you on the spot since you already shared as much as you did, but in the dangerous world in which we live I've never really understood why this sort of technology isn't being better protected.