[Intel] "New" Celeron dictated by marchitecture, not architecture
Posted on 07/12/2002 3:15:33 PM PDT by JameRetief
Column: Heigh-Ho Celeron!
Thursday 11 July 2002, 19:14
See Intel to shift P4 Celerons to Northwood core
The origin of the Celeron, for those of you more recently come to the party, was a marketing coup for Intel, whereby they managed to create an artificial segmentation of their CPU market. First-string Pentium chips were, from the outset, always engineered for whatever performance the engineers could squeeze out of their fabs, with newer and faster dies sometimes warming shelves for up to a year as the clock speed increases were managed and measured out.
Historically the reason that Intel had competitors in the X86 CPU business had to do with IBM requirements. IBM from the outset of the PC business was worried about establishing Intel as a monopoly for supplying processors, and required that Intel lower IBM's risk in the supply of processors by licensing a number of "second source" manufacturers to ensure an abundant supply. Back in the 1980s IBM could dictate such terms to Intel - because IBM, for all practical purposes, created the PC market.
The Origin of the Celeron
Celerons were intentionally crippled, descending - at least spiritually - from the intentionally crippled SX series of 386 and later 486 CPU chips. Intel thus managed to wing two birds with one stone: first, by fulfilling its contractual commitments for providing X86 technology to "second source" CPU manufacturers - Harris, AMD, et al - without sharing the math "extensions", and second, by relegating the competition to producing chips that competed at Intel's "low end". As a bonus, competitors' CPU chips still had sockets for optional Intel math co-processors.
For a time the strategy brought Intel some small licensing revenues from the second source manufacturers, bolstered by a healthy large hit for every sell-through of an Intel math chip. By some strange coincidence, after a time, once competitors started showing up in the math-co-processor space with good and sometimes better products (especially Cyrix), Intel made the space go away.
Celeron's Enormous Success Story - Class Warfare
Intel used its de-tuned Pentium chip to create an opportunity to establish a pecking order. Intel marketing succeeded in equating AMD and CYRIX (now VIA) CPUs, and other competitors' CPU chips like WinChip, with its CELERY (cheap) product lines. This artificial segmentation of the market allowed Intel to price a premium line of chips at a very healthy premium price. Celeron chips, on the other hand, could be sold without as much regard to profit. In fact, aggressive Celeron pricing was used as a cudgel to beat the "second source" CPU makers to death by denying them profits. Indeed, while AMD has stubbornly refused to die, Cyrix, WinChip, Harris, and others are no longer in the X86 CPU business, and many of the original second source companies no longer even exist, except perhaps for some of their intellectual properties and patent licences surviving in other companies like VIA.
Intel's class segmentation strategy worked. Through excellent advertising, press management, and other marketing tricks, Intel managed to create the illusion that instead of a performance spectrum, there were in fact "classes" of CPUs. The myth is propagated to this day, and almost all of Intel's competitors are dead, or challenged to make a profit.
Athlon, Duron, Celeron, Pentium
AMD's Athlon, now just a couple of years old, threatened to open class warfare, and has largely exposed Intel's artificial stratification. When AMD released its Duron series of "economy" chips - crippled slightly - in obvious imitation of the Pentium/Celeron duo, the inevitable parallels between Duron and Celeron were drawn. Yet there was some bewilderment in the market as to whether AMD's flagship Athlon was indeed worthy of the first class accommodations formerly occupied only by the Pentium families.
Doubts about the worthiness of the AMD chips were encouraged by servants of the godzilla of chips; businesses, many of which practise MBMA (Management By Magazine Article) to this day, have resisted change, or taken the King's Shilling, largely defending the existing Intel domination within "business class" computing.
Pesky things like benchmarks, though, were showing that for many operations and uses, Intel was trailing and eating AMD's dust.
Intel marketing folks were in a bind with the release of the Pentium 4. The marketing team found they had to wait for and count on Intel engineers to provide new ammunition. Worse still - the best ammunition available for a time was nothing more than a frequency - the now fabled "gigahertz" comparison. Given a poor stick, the Intel marketing team set about trying to beat the enemy to death with it, and though the blows weren't lethal, they did leave a mark on AMD. (Some questions still arise as to whether the marks are bruises, or bark off the stick.)
In any case, Intel had lost the Celeron advantage. Only the very best Pentium chips threatened AMD's Athlons. AMD's Durons - essentially crippled Athlons - weren't crippled nearly as much as Celerons were compared to their Pentium brethren. With Durons outperforming many Pentiums on benchmarks, the Intel faithful were facing a world-view crisis.
Hammer, Itanium, Opteron, Yamhill
Intel has been telling us for over a decade that a shift away from the old X86 instruction set was coming, and that it would catapult desktops and servers into the 64 bit space. AMD has concurred on the 64 bits, but it has opted to extend the compatibility of X86. AMD, leveraging the near-endless delays associated with Intel's 64 bit desktop/server chip, the Merced/Itanium, has scuttled any possibility of Intel changing the general computing model away from X86 instructions.
Intel has literally been hammered by an AMD chip that isn't even released yet. Itanium has wholly failed as a desktop strategy, and the project may face a revenue disaster considering the enormous R&D costs to date. Intel's hedge strategy, the Yamhill project, rumoured to be compatible with AMD's Hammer, has been vilified by Intel's senior management, though work likely proceeds - just in case.
Moreover, AMD Hammer designs are sporting excellent improvements to 32 bit X86 execution. So spectacular are these Hammer improvements that AMD has adjusted its roadmap to incorporate the enhancements in its 32 bit cores later this year, or early '03.
Intel Re-takes the Performance Crown
Although some benchmarks still clearly show AMD performance to lead Intel in raw performance, Intel's push for higher frequencies has paid off over the last two quarters. Recent Pentium 4 CPUs are showing enough benchmark wins to regain the performance crown from AMD's Athlons.
As a result, Intel has a window of opportunity to try to put the genie back into the bottle, and attempt to once again equate AMD with the Celeron class of second-rate computing.
To this end, we see the news of a new Celeron, based on a Northwood core, with a 400MHz front-side bus. This new Celeron is far less crippled than any previous Celeron, and allows the comparison once more to be drawn between AMD chips and the cheaper Celerons.
Sporting 533MHz and faster front-side bus interfaces on the "premium" Pentium 4s, Intel can soon point to both the raw frequencies and the memory access speeds as class differentiators.
The Celeron Opportunity Window
With Hammer coming out in a few months, there isn't a large window of opportunity to paint AMD into the value aisle. Hammer improvements in 32 bit execution may once again challenge Intel's leadership at the performance level. But if Intel can equate AMD with the cheap/value end of computing, it is worth their while to try... like the dickens.
So expect the marchitecture folks at Intel to drive the engineering department to goose the Celerons, and push the journos to equate AMD chips with Celerons and the value segment.
Intel's marketing task may be futile with the AMD rah-rah squads, but Intel is really aiming at the MBMA Crowd.
Can Intel keep the wool over the eyes of corporate purchasers?
Stay tuned! µ
That's not entirely true, at least for the second set of Celerons - the ones running faster than 300 MHz. A great many of them were P-IIIs that failed their L2 cache test.
Intel knows that little-endian architecture is bad, and would like to abandon it - but Microsoft insists that it remains the standard.
With IA-64 (Itanium/Itanium2), Intel has provided hardware level mechanisms to specify which endian model should be used when accessing data, either big-endian or little-endian.
However all IA-64 instruction fetches are performed little-endian regardless of current endian mode.
That's useful for emulation purposes. The PowerPC has similar features, but normally operates in big-endian mode.
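To make the byte-order distinction the comments are drawing concrete, here is a small illustrative sketch (in Python, using the standard struct module - nothing IA-64 or PowerPC specific) of how the same four bytes in memory decode to different values under each endian model:

```python
import struct

# The same four bytes as they sit in memory.
raw = b'\x12\x34\x56\x78'

# Big-endian: the first byte in memory is the most significant.
as_big = struct.unpack('>I', raw)[0]

# Little-endian: the first byte in memory is the least significant.
as_little = struct.unpack('<I', raw)[0]

print(hex(as_big))     # 0x12345678
print(hex(as_little))  # 0x78563412
```

Hardware that supports both modes, as the commenter describes for IA-64, is essentially choosing between these two interpretations on each data access.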
If true, and they can get the technology to market, AMD could be facing a few more bruises.
I am aware of precisely two advantages of big-endian architectures:
Itanium has NEVER been a desktop strategy. It was developed for the SERVER market, where speed of information transfer is more important.
In my opinion, the worst disadvantage of little-endian format is handling bitmapped data (e.g. graphics buffers and masks) - the word orders must be rearranged with each access to get spatially coherent data. This is a performance issue, and one of the main reasons x86 processors are not preferred for graphically-intensive applications.
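The spatial-coherence complaint above can be sketched in a few lines (an illustrative Python example, not x86- or driver-specific): when four pixels packed left-to-right are loaded as a little-endian 32-bit word, the leftmost pixel lands in the least significant byte, so spatial order and numeric significance disagree; a big-endian load keeps them aligned.

```python
import struct

# Four 8-bit pixels laid out left-to-right in memory.
scanline = bytes([0xAA, 0xBB, 0xCC, 0xDD])

le_word = struct.unpack('<I', scanline)[0]  # little-endian 32-bit load
be_word = struct.unpack('>I', scanline)[0]  # big-endian 32-bit load

# Big-endian keeps the leftmost pixel in the most significant byte,
# matching left-to-right spatial order; little-endian reverses it.
print(hex(be_word))  # 0xaabbccdd
print(hex(le_word))  # 0xddccbbaa
```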
If I recall correctly, big-endian order can also perform better on longword shifting, symbol hashing, lookup tables, etc.
The problem with graphics is simply that whoever designed early video boards decided to make them clock out the most significant bit first. Whether this was because the early graphics board designs were for big-endian computers, or whether it was because it made the graphics look like their conventionally-printed binary or hex representation I don't know. My guess would be that early character-based designs clocked out the MSB of the character ROM first for the above reasons, and because the bit order used there didn't have any effect whatsoever on the rest of the system. When people first designed bitmap displays, they could easily have made them clock out LSB first but at the time there was no reason to do so.
BTW, if performance rather than compatibility is the goal, designing a graphics system to clock out LSB first would be trivial. Alternatively, the address counter could be changed to count in reverse sequence. It might seem a little odd to have 0,0 be the lower-right corner of the screen, but if the code is written for that I see no reason to expect any performance disadvantages to such a system.
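The "trivial" bit-order adaptation the comment mentions amounts to reversing the bits within each framebuffer byte. A quick sketch (a hypothetical helper, not taken from any real graphics driver):

```python
def reverse_bits(byte):
    """Reverse the bit order of one byte (MSB-first <-> LSB-first clock-out)."""
    out = 0
    for _ in range(8):
        out = (out << 1) | (byte & 1)  # shift the current LSB into the result
        byte >>= 1
    return out

print(bin(reverse_bits(0b11010000)))  # 0b1011  (i.e. 0b00001011)
```

In hardware this is cheaper still - it is just a matter of which end of the shift register the video circuitry taps first.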