Posted on 12/15/2004 6:44:58 PM PST by BenLurkin
SAN JOSE, Calif. (AP) - Intel Corp. has reached an agreement to hire hundreds of Hewlett-Packard Co. engineers who helped design the Itanium microprocessor, a massive joint project between the two technology companies since the early 1990s. As a result, all Itanium processor design work will now be done entirely within Intel, though HP on Wednesday announced it plans to invest more than $3 billion over the next three years to continue its commitment to the chip.
The HP team, which is based in Fort Collins, Colo., will not have to relocate, said Intel spokesman Robert Manetta. Other terms of the agreement, which will be formally announced Thursday, were not released.
Itanium was conceived as a processor for high-end computers like servers and workstations. Specifically, it was supposed to displace Reduced Instruction Set Computing processors from International Business Machines Corp. and Sun Microsystems Inc.
But after nearly a decade of development, Itanium was launched in 2001 just as the tech bubble had popped and demand for high-end servers stagnated. Lower-priced chips like those used in personal computers also were becoming powerful enough to capture some of the business.
Still, the RISC server market remains extremely lucrative, with an estimated $20 billion a year in sales.
"The addition of this world-class design team to Intel's industry-leading capabilities will further strengthen the product line," said Paul Otellini, Intel's chief operating officer.
HP's investment will cover research and development, server and system software design, and applications, as well as sales and marketing, the companies said.
---
On the Net:
Intel Itanium: http://www.intel.com/itanium
AP-ES-12-15-04 2005EST
Do you like threads?
Oh, those kind...to be honest, I'm so busy down at the physical level of the chip, I don't have much opinion on some architecture issues these days. Life at ultra-deep submicron keeps me busy enough... :-(
A big blue monster?
Bump !!
Spoilsport. Does it have initials that stand for "I've Been Mugged"? ;)
Howdy Cuz!
LOL I am sitting in my big blue cubicle even now.
If you're in Vermont, it's time to go home!
If you can give me a clue to your ID, I can verify you on Blue Pages, and maybe tell you a little about what we're doing.
You have freep mail.
I think AMD did very well with its 64-bit extensions to the x86 ISA. IBM also did well with its PPC 970 chip, which was designed from scratch to do 32- and 64-bit natively with no performance hit in either mode. The Itanium is a different monster, a 64-bit chip that has to run 32-bit code in emulation -- a disaster when general computing is still mostly 32-bit (although Itaniums do well in supercomputers).
Also, the AMD64 chips double the number of general-purpose registers in 64-bit mode, from eight to 16 (and add eight more 128-bit XMM/SSE registers, likewise for a total of 16). I remember the Unreal Tournament developers saying that this alone gave them a huge jump in speed.
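To put the emulation-overhead point in toy code (my own illustration in Python, not anything from the actual chips): a native 64-bit add is a single operation, while a machine with a 32-bit word has to split the work into low/high halves and propagate a carry between them.

```python
# Toy model of why native word width matters. Real CPUs do this in
# hardware, so this is only a sketch of the extra work involved.

MASK32 = 0xFFFFFFFF
MASK64 = 0xFFFFFFFFFFFFFFFF

def add64_native(a, b):
    """One 64-bit add, as a 64-bit ALU would perform it."""
    return (a + b) & MASK64

def add64_via_32bit(a, b):
    """The same add emulated with 32-bit halves: two adds plus a carry."""
    lo = (a & MASK32) + (b & MASK32)
    carry = lo >> 32
    hi = ((a >> 32) + (b >> 32) + carry) & MASK32
    return (hi << 32) | (lo & MASK32)

a, b = 0x1234567890ABCDEF, 0x0FEDCBA098765432
assert add64_native(a, b) == add64_via_32bit(a, b)
```

The extra registers help for the same reason: with twice as many places to keep intermediate values, the compiler spills less to memory.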
I think AMD did very well with its 64-bit extensions to the x86 ISA.
Ah, you're talking about architectural issues, and I'm talking about physical issues.
Just imagine a nice medium-sized burg. Then imagine it if all the roads and paths were made double the width. Assuming that the lot sizes stayed the same, the town would grow, would it not?
And when you are driving your car at the same speed as before (i.e. the limits of physics), it takes you longer to go from one corner of the bigger town to the other corner, right?
It's the same way in chips, especially if they are constrained not by block size, but by wiring considerations. And of course, those blocks are going to grow in size, because now they have to process 64 bits, not 32 - some will double in size.
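The town analogy works out numerically like this (a quick sketch with made-up units, not real process data): if a block doubles in area to handle 64 bits, each edge grows by sqrt(2), so a corner-to-corner wire gets about 41% longer. At a fixed signal velocity, the wire delay grows by the same factor.

```python
import math

# Assumed, arbitrary units -- only the ratios matter here.
edge_32bit = 10.0                        # die edge of the 32-bit design
edge_64bit = edge_32bit * math.sqrt(2)   # area doubled -> edge * sqrt(2)

diag_32 = edge_32bit * math.sqrt(2)      # corner-to-corner wire length
diag_64 = edge_64bit * math.sqrt(2)

growth = diag_64 / diag_32               # ~1.414: wires ~41% longer
```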
Now, the fabs do nice things like give us physical designers more layers to play with, and if we lay out things nicely, we minimize the cost of these double-wide bit lanes. But bigger they are, and bigger = lower frequency.
On the other hand, now you can gain some speed back by taking advantage of 64-bit calculations in your OS and applications. So in some cases it's a wash, or, if 64-bit calcs really speed things up, a win. But it still has a lower frequency than a smaller design.
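That wash-or-win argument can be put in a back-of-envelope model. All the numbers below are invented for illustration: suppose the wider design clocks 10% lower, but code doing genuinely 64-bit math needs half as many ALU operations (no more split/carry pairs), while 32-bit-only code sees no reduction.

```python
# Assumed figures, purely illustrative -- not measurements of any chip.
clock_ratio = 0.90            # wider design runs at 90% of the frequency
ops_ratio_64bit_math = 0.5    # half the operations for true 64-bit calcs
ops_ratio_32bit_math = 1.0    # 32-bit-only code gets no such reduction

# Relative throughput = clock ratio / relative operation count.
speedup_64bit_code = clock_ratio / ops_ratio_64bit_math   # 1.8: a win
speedup_32bit_code = clock_ratio / ops_ratio_32bit_math   # 0.9: a loss
```

Under these assumed numbers, 64-bit-heavy workloads come out well ahead while purely 32-bit workloads just eat the clock penalty, which is the wash-or-win split described above.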
Then I guess it's nice that going to 64 is usually accompanied by a process shrink. Moore just said we could double the transistors. He didn't necessarily say what we had to do with those transistors: use the same amount and get more yield per wafer (and often more clock speed), or grow the number of transistors we use per chip to get more abilities with the same yield.
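The arithmetic behind the shrink remark, assuming the classic ~0.7x linear scaling per full process node: area scales with the square of the linear dimension, so each transistor's footprint roughly halves, and designers can spend that either on more dice per wafer or on about twice the transistors per die.

```python
# Classic full-node scaling assumption (approximate, not any specific fab).
linear_shrink = 0.7
area_ratio = linear_shrink ** 2      # ~0.49: each feature's footprint
transistor_budget = 1 / area_ratio   # ~2.04x transistors in the same area
```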
So this time you lose some clock in favor of abilities. It's a good tradeoff for those who want it.