[Intel] "New" Celeron dictated by marchitecture, not architecture
The Inquirer ^ | July 11, 2002 | Andrew Busigin

Posted on 07/12/2002 3:15:33 PM PDT by JameRetief

Column: Heigh-Ho Celeron!

By Andrew Busigin: Thursday 11 July 2002, 19:14

INTEL'S LATEST developments around future Celeron performance make for an interesting study in marketing. It appears that once again Intel's marketing folks are steering the engineering team, and it shouldn't surprise anyone, since the Celeron has always been a marketing phenomenon more than an engineering product.

See Intel to shift P4 Celerons to Northwood core

History Lesson

The origin of the Celeron, for those of you who have come to the party more recently, was a marketing coup for Intel, whereby they managed to create an artificial segmentation of their CPU market. First-string Pentium chips were from the outset engineered for whatever performance the engineers could squeeze out of the fabs, with newer and faster dies sometimes warming shelves for up to a year as clock speed increases were managed and measured out.

Historically, the reason Intel had competitors in the X86 CPU business had to do with IBM's requirements. IBM, from the outset of the PC business, was worried about establishing Intel as a monopoly supplier of processors, and required that Intel lower IBM's supply risk by licensing a number of "second source" manufacturers to ensure an abundant supply. Back in the 1980s IBM could dictate such terms to Intel - because IBM, for all practical purposes, created the PC market.

The Origin of the Celeron

Celerons were intentionally crippled, descending - at least spiritually - from the intentionally crippled SX series of 386 and later 486 CPU chips. Intel thus managed to wing two birds with one stone: first, by fulfilling its contractual commitments to provide X86 technology to "second source" CPU manufacturers - Harris, AMD, et al - without sharing the math "extensions", and second, by relegating the competition to producing chips that competed at Intel's "low end". As a bonus, competitors' CPU chips still had sockets for optional Intel math co-processors.

For a time the strategy brought Intel some small licensing revenues from the second source manufacturers, bolstered by a healthy hit for every sell-through of an Intel math chip. By some strange coincidence, once competitors started showing up in the math co-processor space with good and sometimes better products (especially Cyrix), Intel made the space go away.

Celeron's Enormous Success Story - Class Warfare

Intel used its de-tuned Pentium chip to create an opportunity to establish a pecking order. Intel marketing succeeded in equating the CPUs of AMD, Cyrix (now VIA), and other competitors like WinChip with its CELERY (cheap) product line. This artificial segmentation of the market allowed Intel to price a premium line of chips at a very healthy premium. Celeron chips, on the other hand, could be sold without as much regard to profit. In fact, aggressive Celeron pricing was used as a cudgel to beat the "second source" CPU makers to death by denying them profits. While AMD has stubbornly refused to die, Cyrix, WinChip, Harris, and others are no longer in the X86 CPU business, and many of the original second source companies no longer even exist, except insofar as some of their intellectual property and patent licences survive in other companies like VIA.

Intel's class segmentation strategy worked. Through excellent advertising, press management, and other marketing tricks, Intel managed to create the illusion that instead of a performance spectrum there were in fact "classes" of CPUs. The myth is propagated to this day, and almost all of Intel's competitors are dead, or challenged to make a profit.

Athlon, Duron, Celeron, Pentium

AMD's Athlon, now just a couple of years old, threatened to open class warfare, and has largely exposed Intel's artificial stratification. When AMD released its Duron series of "economy" chips - crippled slightly - in obvious imitation of the Pentium/Celeron duo, the inevitable parallels between Duron and Celeron were drawn. Yet there was some bewilderment in the market as to whether AMD's flagship Athlon was indeed worthy of the first-class accommodations formerly occupied only by the Pentium families.

Doubts about the worthiness of the AMD chips were encouraged by servants of the godzilla of chips, and many businesses - run by MBMA (Management By Magazine Article) to this day - have resisted change, or taken the King's Shilling, largely defending the existing Intel domination within "business class" computing.

Pesky things like benchmarks, though, were showing that for many operations and uses Intel was trailing and eating AMD's dust.

Intel's marketing folks were in a bind with the release of the Pentium 4. The marketing team found they had to wait for and count on Intel's engineers to provide new ammunition. Worse still - the best ammunition available for a time was nothing more than a frequency - the now fabled "gigahertz" comparison. Given a poor stick, the Intel marketing team set about trying to beat the enemy to death with it, and though the blows weren't lethal, they did leave a mark on AMD. (Questions still arise as to whether the marks are bruises or bark knocked off the stick.)

In any case, Intel had lost the Celeron advantage. Only the very best Pentium chips threatened AMD's Athlons. AMD's Durons - essentially crippled Athlons - weren't crippled nearly as much as Celerons were compared to their Pentium brethren. With Durons outperforming many Pentiums on benchmarks, the Intel faithful were facing a world-view crisis.

Hammer, Itanium, Opteron, Yamhill

Intel has been telling us for over a decade that a shift away from the old X86 instruction set was coming, and that it would catapult desktops and servers into the 64-bit space. AMD has concurred on the 64 bits, but it has opted to extend the compatibility of X86. AMD, leveraging the near-endless delays associated with Intel's 64-bit desktop/server chip, the Merced/Itanium, has scuttled any possibility of Intel changing the general computing model away from X86 instructions.

Intel has literally been hammered by an AMD chip that isn't even released yet. Itanium has wholly failed as a desktop strategy, and the project may face a revenue disaster considering the enormous R&D costs to date. Intel's hedge strategy, the Yamhill project, rumoured to be compatible with AMD's Hammer, has been vilified by Intel's senior management, though work likely proceeds - just in case.

Moreover, AMD's Hammer designs are sporting excellent improvements to 32-bit X86 execution. So spectacular are these Hammer improvements that AMD has adjusted its roadmap to incorporate the enhancements in its 32-bit cores later this year, or early '03.

Intel Re-takes the Performance Crown

Although some benchmarks still clearly show AMD leading in raw performance, Intel's push for higher frequencies has paid off over the last two quarters. Recent Pentium 4 CPUs are showing enough benchmark wins to regain the performance crown from AMD's Athlons.

As a result, Intel has a window of opportunity to try to put the genie back in the bottle, and attempt to once again equate AMD with the Celeron class of second-rate computing.

To this end, we see the news of a new Celeron, based on a Northwood core, with a 400MHz front-side bus. This new Celeron is far less crippled than any previous Celeron, and allows the comparison once more to be drawn between AMD chips and the cheaper Celerons.

With the "premium" Pentium 4s sporting 533MHz and faster front-side bus interfaces, Intel can soon point to both the raw frequencies and the memory access speeds as class differentiators.

The Celeron Opportunity Window

With Hammer coming out in a few months, there isn't a large window of opportunity to paint AMD into the value aisle. Hammer's improvements in 32-bit execution may once again challenge Intel's leadership at the performance level. But if Intel can equate AMD with the cheap/value end of computing, it is worth their while to try... like the dickens.

So expect the marchitecture folks at Intel to drive the engineering department to goose the Celerons, and push the journos to equate AMD chips with Celerons and the value segment.

Intel's marketing task may be futile with the AMD rah-rah squads, but Intel is really aiming at the MBMA Crowd.

Can Intel keep the wool over the eyes of corporate purchasers?

Stay tuned! µ


TOPICS: Business/Economy; Editorial; News/Current Events; Technical
KEYWORDS: 32bit; 64bit; amd; athlon; celeron; fsb; hammer; intel; pentium4; techindex

1 posted on 07/12/2002 3:15:33 PM PDT by JameRetief

To: SlickWillard
Ping
2 posted on 07/12/2002 3:33:46 PM PDT by KayEyeDoubleDee

To: JameRetief
Celerons were intentionally crippled...

That's not entirely true, at least for the second set of Celerons - the ones running faster than 300 MHz. A great many of them were P-III's that failed their L2 cache test.

3 posted on 07/12/2002 3:48:31 PM PDT by general_re

To: JameRetief
Never trust a little-endian CPU.
4 posted on 07/12/2002 4:27:35 PM PDT by HAL9000

To: HAL9000
What difference does that make?
5 posted on 07/12/2002 5:14:54 PM PDT by Paleo Conservative

To: JameRetief
Celeron: The Ford Escort of CPUs.
6 posted on 07/12/2002 5:16:56 PM PDT by Timesink

To: Paleo Conservative
Little-endian architecture is a relic of the 70s that was intended to maintain a degree of compatibility with 8-bit processors. Now it is just an ugly, inefficient kludge.

Intel knows that little-endian architecture is bad, and would like to abandon it - but Microsoft insists that it remains the standard.
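
As a point of reference, here is a minimal C sketch of what the two byte orders actually mean in memory; the sample value and its output are illustrative only and are not drawn from the thread.

```c
/* Minimal illustration: how a 32-bit value is laid out byte-by-byte,
 * and a runtime check of the byte order of the machine running it. */
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t value = 0x11223344;
    const unsigned char *bytes = (const unsigned char *)&value;

    /* Little-endian (x86): bytes appear as 44 33 22 11 (low byte first).
     * Big-endian:          bytes appear as 11 22 33 44 (high byte first). */
    for (int i = 0; i < 4; i++)
        printf("byte %d: %02x\n", i, bytes[i]);

    if (bytes[0] == 0x44)
        puts("this machine is little-endian");
    else if (bytes[0] == 0x11)
        puts("this machine is big-endian");
    return 0;
}
```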

7 posted on 07/12/2002 5:40:27 PM PDT by HAL9000

To: *tech_index; Ernest_at_the_Beach
.
8 posted on 07/12/2002 5:50:49 PM PDT by Libertarianize the GOP

To: HAL9000
Intel knows that little-endian architecture is bad, and would like to abandon it - but Microsoft insists that it remains the standard.

With IA-64 (Itanium/Itanium2), Intel has provided hardware level mechanisms to specify which endian model should be used when accessing data, either big-endian or little-endian.

However all IA-64 instruction fetches are performed little-endian regardless of current endian mode.

9 posted on 07/12/2002 6:19:01 PM PDT by Lorenb420

To: Lorenb420
With IA-64 (Itanium/Itanium2), Intel has provided hardware level mechanisms to specify which endian model should be used when accessing data, either big-endian or little-endian.

That's useful for emulation purposes. The PowerPC has similar features, but normally operates in big-endian mode.

However all IA-64 instruction fetches are performed little-endian regardless of current endian mode.

Yuck.

10 posted on 07/12/2002 6:31:10 PM PDT by HAL9000

To: JameRetief
I'm a big fan of AMD's chips' performance and view them as generally at least the equal of Intel's in that regard.

For mission-critical applications, however, the robustness of Intel's chips in the face of bad stuff happening is significantly ahead of AMD's.
11 posted on 07/12/2002 6:51:50 PM PDT by RightOnTheLeftCoast

To: JameRetief
I read somewhere that Intel had a breakthrough and is running 10-gigahertz chips in the lab with no heating problem.

If true, and they can get the technology to market, AMD could be facing a few more bruises.

--Boris

12 posted on 07/12/2002 8:36:07 PM PDT by boris

To: JameRetief
What is Intel going to do when 80 or 90 percent of their business is desktop computing? They can't seem to get Itanium accepted in mainstream desktop computing. One thing that Intel excels at... chip fabrication. Wouldn't it be funny if they eventually got degraded into being an AMD chip fab ;) Or worse... an AMD cloner. ahaHAHA!
13 posted on 07/12/2002 8:49:27 PM PDT by aSkeptic

To: HAL9000
Little-endian architecture is a relic of the 70s that was intended to maintain a degree of compatibility with 8-bit processors. Now it is just an ugly, inefficient kludge.

I am aware of precisely two advantages of big-endian architectures:

These benefits are counterbalanced by a couple of significant benefits of little-endian architecture:

So out of curiosity, what's "wrong" with little-endian architectures that makes big-endian ones better?
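
As an aside, one convenience usually credited to little-endian layouts can be shown in a few lines of C; this sketch is illustrative only and makes no claim about which specific advantages the post above had in mind. The low-order byte of a wide integer sits at its base address, which is handy for narrower views of the same value and for carry-propagating, extended-precision arithmetic.

```c
/* Illustrative sketch (not from the thread): two little-endian conveniences. */
#include <stdio.h>
#include <stdint.h>

/* A union gives legal narrower views of the same storage in C. */
union view {
    uint32_t u32;
    uint16_t u16[2];
    uint8_t  u8[4];
};

int main(void) {
    union view v = { .u32 = 42 };   /* value small enough to fit in one byte */

    /* On little-endian hardware the index-0 narrower views still read 42,
     * because the low-order byte is at the lowest address; on big-endian
     * hardware they would read 0. */
    printf("32-bit: %u  16-bit: %u  8-bit: %u\n",
           v.u32, (unsigned)v.u16[0], (unsigned)v.u8[0]);

    /* Extended-precision addition walks limbs from the base address upward,
     * carrying from low-order to high-order - the natural memory order on
     * a little-endian machine. */
    uint32_t a[2]   = { 0xFFFFFFFFu, 0x00000001u };  /* low limb first */
    uint32_t b[2]   = { 0x00000001u, 0x00000000u };
    uint32_t sum[2] = { 0, 0 };
    uint32_t carry  = 0;
    for (int i = 0; i < 2; i++) {
        uint64_t t = (uint64_t)a[i] + b[i] + carry;
        sum[i] = (uint32_t)t;
        carry  = (uint32_t)(t >> 32);
    }
    printf("sum limbs: low=%08x high=%08x\n", sum[0], sum[1]);
    return 0;
}
```
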
14 posted on 07/12/2002 10:17:58 PM PDT by supercat

To: JameRetief
Itanium has wholly failed as a desktop strategy....

Itanium has NEVER been a desktop strategy. It was developed for SERVER market usage, where speed of information transfer is more important.

15 posted on 07/12/2002 10:26:22 PM PDT by Pistolshot

To: Pistolshot
The Merced/Itanium was developed for both server and desktop use. The reason Intel's 64 bit processors are not yet being promoted for the desktop is the lack of software (i.e. no 64 bit Windows OS). Intel intended to use the Itanium to begin transitioning the desktop market to 64 bit, but until the OS is available they will continue to concentrate on the server side.

Now, having said that, in the server market the Itanium has not fared well. They are having trouble selling them, and even Dell, the top PC seller for Intel, has decided not to use the Itanium II in their server systems.
16 posted on 07/12/2002 10:53:12 PM PDT by JameRetief

To: supercat
Those are some interesting points about extended-precision data formats. I hadn't thought about those, but they make sense.

In my opinion, the worst disadvantage of little-endian format is handling bitmapped data (e.g. graphics buffers and masks) - the word orders must be rearranged with each access to get spatially coherent data. This is a performance issue, and one of the main reasons x86 processors are not preferred for graphically-intensive applications.

If I recall correctly, big-endian order can also perform better on longword shifting, symbol hashing, lookup tables, etc.

17 posted on 07/12/2002 10:57:19 PM PDT by HAL9000

To: JameRetief
Cyrix, WinChip, Harris, and others are no longer in the X86 CPU business

Cyrix lives on in the National Semiconductor Geode processor line. The WinChip still soldiers on and is doing very well as the basis of the Centaur C5, better known as the VIA C3. However, it was briefly known as the Cyrix III until VIA figured out that Cyrix wasn't a name associated with quality.

And Harris still manufactures specialty IA processors such as the 386, 486, and licensed 6x86.

I know because I use all of the above on a daily basis.
18 posted on 07/12/2002 11:06:25 PM PDT by ContentiousObjector

To: Libertarianize the GOP; JameRetief
Thanks for the ping!

Good article!

19 posted on 07/12/2002 11:15:51 PM PDT by Ernest_at_the_Beach

To: HAL9000
In my opinion, the worst disadvantage of little-endian format is handling bitmapped data (e.g. graphics buffers and masks) - the word orders must be rearranged with each access to get spatially coherent data. This is a performance issue, and one of the main reasons x86 processors are not preferred for graphically-intensive applications.

The problem with graphics is simply that whoever designed early video boards decided to make them clock out the most significant bit first. Whether this was because the early graphics board designs were for big-endian computers, or because it made the graphics look like their conventionally-printed binary or hex representation, I don't know. My guess would be that early character-based designs clocked out the MSB of the character ROM first for the above reasons, and because the bit order used there didn't have any effect whatsoever on the rest of the system. When people first designed bitmap displays, they could easily have made them clock out LSB first, but at the time there was no reason to do so.

BTW, if performance rather than compatibility is the goal, designing a graphics system to clock out LSB first would be trivial. Alternatively, the address counter could be changed to count in reverse sequence. It might seem a little odd to have 0,0 be the lower-right corner of the screen, but if the code is written for that I see no reason to expect any performance disadvantages to such a system.
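
To make the two conventions concrete, here is a small illustrative C sketch (hypothetical helper names, not from the post) of the addressing math for a 1-bit-per-pixel scanline under MSB-first and LSB-first packing:

```c
/* Illustrative sketch: reading pixel x from a monochrome scanline
 * under the two bit-packing conventions discussed above. */
#include <stdio.h>
#include <stdint.h>

/* MSB-first: the leftmost pixel in each byte is bit 7 - the order
 * early video boards clocked out. */
static int pixel_msb_first(const uint8_t *scanline, int x) {
    return (scanline[x / 8] >> (7 - (x % 8))) & 1;
}

/* LSB-first: the leftmost pixel in each byte is bit 0, which lines up
 * with little-endian byte order when whole words are shifted. */
static int pixel_lsb_first(const uint8_t *scanline, int x) {
    return (scanline[x / 8] >> (x % 8)) & 1;
}

int main(void) {
    uint8_t scanline[1] = { 0x80 };  /* only bit 7 of the first byte is set */

    /* The same stored byte yields different on-screen pixels depending
     * on which convention the display hardware assumes. */
    printf("MSB-first: pixel 0 = %d, pixel 7 = %d\n",
           pixel_msb_first(scanline, 0), pixel_msb_first(scanline, 7));
    printf("LSB-first: pixel 0 = %d, pixel 7 = %d\n",
           pixel_lsb_first(scanline, 0), pixel_lsb_first(scanline, 7));
    return 0;
}
```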

20 posted on 07/12/2002 11:42:17 PM PDT by supercat

