Posted on 09/22/2009 8:57:14 PM PDT by Ernest_at_the_Beach
Moore's Law will keep going strong for decades, Intel CTO Justin Rattner predicts.
Why we're hard-wired to ignore Moore's Law
Read the Intel CTO's take on why machines could ultimately match human intelligence
Moore's Law, in force for more than 40 years, says that the number of transistors that can be placed on an integrated circuit will double every 18 to 24 months. Predictions of the demise of Moore's Law are routinely heard in the IT world, and some organizations are trying to find a replacement for silicon chip technology. But Rattner says that silicon has plenty of life left and that there is no end in sight for Moore's Law.
"If Moore's Law is simply a measure of the increase in the number of electronic devices per chip, then Moore's Law has much more time to go, probably decades," Rattner said in an interview with Network World.
(Excerpt) Read more at networkworld.com ...
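As a rough back-of-the-envelope illustration of what that doubling cadence implies, here is a minimal Python sketch; the starting transistor count and the 24-month doubling period are illustrative assumptions, not figures from the article:

```python
# Rough illustration of the doubling described above: transistor counts
# doubling every 18-24 months. The starting count and the 24-month
# doubling period are illustrative assumptions, not figures from the article.

def projected_transistors(start_count, years, doubling_months=24):
    """Project a transistor count forward, doubling every `doubling_months`."""
    doublings = (years * 12) / doubling_months
    return start_count * 2 ** doublings

# Example: ~731 million transistors (roughly a 2009-era quad-core die),
# projected 10 and 20 years out.
for years in (10, 20):
    print(f"{years} years out: ~{projected_transistors(731e6, years):.3g} transistors")
```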
fyi
Isn’t this the whole reason they went to multi-core CPUs? They know they can’t keep up with Moore’s Law forever, so they are hedging their bets and giving themselves a way to increase computing power without needing a die shrink every 20 months.
I expect in 50 years we will see home desktop computers with 25 CPUs and a hundred cores each...with RAM and solid-state drives integrated onto the CPU circuitry...and wireless modems integrated onto CPUs as well.
Cooling will be by thermoelectric refrigeration (the Peltier effect) and will also be integrated onto the CPU circuitry.
The problem isn’t in the architecture or the OS. How do you make it aware? You can reduce a complex situation to horribly complex equations and then write a program that solves them. If human innovation follows a logical, if complex, process, that process has eluded us. Without a thorough understanding of the creative process, it will be impossible to make a machine that can mimic it.
Yes and no.
There are several other architectural reasons for going to multi-core products, starting with issues of heat, clock speed, how deep a pipeline they can successfully build, memory bandwidth, etc.
At some point, the race for ultimate clock speed became counterproductive. OK, so you have a couple of choices at that point, and at least one of those choices involves leaving the x86 instruction set behind.
Intel ain’t about to do that. That isn’t an option.
So with that degree of freedom removed, you have to figure out how to make the x86 instruction set scale upwards in speed.
Are insects creative? start there, and work your way up the food chain. I don’t think it’s so impossible. At some point the wall between hardware and software will probably need to be taken down...somehow.
Stands to reason...someday they will have no choice BUT to leave the x86 pattern of doing things.
Die shrinks and additional cores help with computing progress. A single-core Intel “Prescott” at 90nm used approximately 115W at peak load. Now you have a processor (45nm “Nehalem”) with 4 much more efficient cores that uses less than that amount of power and provides 6X+ the computational power.
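For a rough sense of what that means in performance per watt, here is a small sketch of the arithmetic; the Nehalem power figure (~95W) and the 6x speedup are assumed/illustrative values drawn from the post above and typical TDP ratings of that era, not measurements:

```python
# Back-of-the-envelope perf/watt comparison using the figures above.
# The Prescott number comes from the post; the Nehalem TDP (~95W) and
# the 6x speedup are assumed/illustrative values.

prescott_watts = 115      # single-core 90nm Prescott at peak load (from the post)
nehalem_watts = 95        # assumed quad-core 45nm Nehalem TDP
speedup = 6.0             # "6X+ the computational power" claimed above

perf_per_watt_gain = speedup * (prescott_watts / nehalem_watts)
print(f"Roughly {perf_per_watt_gain:.1f}x better performance per watt")
```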
Well, I'm guessing that since they are still talking about copper for the foreseeable future, that it never panned out.
I seriously doubt they will.
There have been several architectures with better designs that have been thumped rather soundly in the market (eg, PowerPC, MIPS) because they didn’t have the huge base of software on them that the x86 does.
Don’t get me wrong: I don’t like the x86 architecture. To me, the x86 architecture is every bit as infuriating as the S/370 architecture was. The only difference was that you could get more useful work done on the 370.
But thanks to Microsoft, there is simply no getting around the fact that there is a huge base of s/w out there now that needs x86 execution - and users want this code executed *fast*. Look at how many years “Virtual PC” was available on PPC Macs - and the Mac didn’t take off.
Out comes Apple with x86 Macs and full-speed execution of x86 code, and Apple machines take off in sales. This is just one example.
Intel was reaching a really hard point when they got to the Pentium D - it was screamin’ hot, increases in the clock speed were yielding increasingly marginal increases in performance, etc. And along comes the Israeli Intel group who creates what you now see as the “Core” technology, in effect creating a RISC architecture for the “real” chip and then creating an “x86 interpreter” running on top of the “real” chip. They broke the x86 instruction set away from the actual internal chip architecture and are now effectively emulating it on the chip.
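To make the “x86 interpreter on top of a RISC-like core” idea a bit more concrete, here is a purely conceptual toy sketch of the decode step; the instruction forms and micro-op names are invented for illustration and have nothing to do with Intel’s actual micro-op encoding:

```python
# Toy illustration of the idea described above: a complex x86-style
# instruction is broken into simpler RISC-like micro-ops before the
# core executes it. The micro-op names here are invented, not Intel's.

def decode_to_uops(instruction):
    """Translate a (simplified) x86-style instruction into micro-ops."""
    op, *operands = instruction.split()
    if op == "add" and operands[0].startswith("["):   # add [mem], reg
        mem = operands[0].strip("[],")
        reg = operands[1]
        return [
            ("LOAD",  "tmp", mem),          # read the memory operand
            ("ADD",   "tmp", reg),          # do the arithmetic
            ("STORE", mem,   "tmp"),        # write the result back
        ]
    if op == "add":                          # add reg, reg maps 1:1
        return [("ADD", operands[0].strip(","), operands[1])]
    raise ValueError(f"unsupported instruction: {instruction}")

print(decode_to_uops("add [counter], eax"))
```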
Using this kind of thinking, Intel can push the x86 instruction set quite far into the future, and adding additional opcodes will allow them to simply extend it as they’ve done with 64-bit operations.
Like it or not (and I don’t, but now I’m in a position where I no longer really care), the x86 instruction set is here to stay, probably for a while.
A few years ago I read about a guy who did a bunch of behavioral experiments with house flies. His assessment: "In some ways they're incredibly smart, but in other ways they're incredibly stupid."
I've always looked forward to advancement in the study of insect neurology. I heard the challenge back in the sixties: "Let's see them build a gnat." In the nineties, I used to look at the on-line site, Atlas of the Fly Brain, but it seems to have fallen into disuse.
There's been work in insect vision involving the "optical flow" concept that has yielded results, but nothing to really crack open the mystery of the insect brain. Where and when will the breakthrough come?
Impossibility is an argument for preachers and philosophers. Personally, I believe that intelligence is more than the use of logic. Bayes’ Theorem aside, what are the logical leaps that men made to arrive at our understanding of the universe?
Quantum mechanics is one such leap. A machine could analyze black body radiation and conclude that X energy produces Y radiation. But can it examine its own premise and conclusion and realize that Y is an impossible answer given the formulae we programmed?
It could do that if it had a list of impossible answers, but where does it get that list? We provide it. Can you write a program to extrapolate: x=x; y=y; z=z; x<>y; x<>z; y<>z.
Therefore x+y<>x+z.
We can tell it those simple terms in those simple equations, but we have to tell it how to translate those principles into Yankees, Red Sox, Orioles. Perhaps there is an equation to compute it, but we don’t understand the equation, so we can’t program it into a machine. And even if we do, how does the machine develop the response, “I don’t care about baseball. Let’s talk about hockey.”
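For what it’s worth, the mechanical half of that inference is easy to verify by brute force; the integer test range below is an arbitrary assumption, and as the poster says, checking the rule is not the same as discovering it:

```python
# Brute-force check of the inference stated above: if y <> z,
# then x + y <> x + z. This verifies the rule over a small range of
# integers (an arbitrary choice); it does not "discover" the rule,
# which is the commenter's actual point.

from itertools import product

def implication_holds(lo=-10, hi=10):
    for x, y, z in product(range(lo, hi + 1), repeat=3):
        if y != z and (x + y) == (x + z):
            return False   # counterexample found
    return True

print(implication_holds())   # True: no counterexample in the tested range
```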
“Read the Intel CTO’s take on why machines could ultimately match human intelligence”
I think they eventually would, thanks to our brain-dead pop culture and dumbed-down curricula such as whole language and Everyday Math. But imagine if we took some of the billions of dollars that are going into improving machine intelligence and used it to improve human intelligence.
“Stands to reason...someday they will have no choice BUT to leave the x86 pattern of doing things.”
they tried, remember?
HAL? Is that you? It’s me, Dave.
Good.
Moore's Law is what's letting them put multiple CPUs on the same chip.
I always preferred, “What would a computer do with a lifetime supply of chocolate?”
Find some way to generate electricity with it.