[Intel] "New" Celeron dictated by marchitecture, not architecture
The Inquirer
| July 11, 2002
| Andrew Busigin
Posted on 07/12/2002 3:15:33 PM PDT by JameRetief
To: SlickWillard
Ping
To: JameRetief
Celerons were intentionally crippled...
That's not entirely true, at least for the second set of Celerons - the ones running faster than 300 MHz. A great many of them were P-IIIs that failed their L2 cache test.
To: JameRetief
Never trust a little-endian CPU.
4 posted on 07/12/2002 4:27:35 PM PDT by HAL9000
To: HAL9000
What difference does that make?
To: JameRetief
Celeron: The Ford Escort of CPUs.
6 posted on 07/12/2002 5:16:56 PM PDT by Timesink
To: Paleo Conservative
Little-endian architecture is a relic of the 70s that was intended to maintain a degree of compatibility with 8-bit processors. Now it is just an ugly, inefficient kludge.
Intel knows that little-endian architecture is bad, and would like to abandon it - but Microsoft insists that it remains the standard.
7 posted on 07/12/2002 5:40:27 PM PDT by HAL9000
To: *tech_index; Ernest_at_the_Beach
.
To: HAL9000
Intel knows that little-endian architecture is bad, and would like to abandon it - but Microsoft insists that it remains the standard.
With IA-64 (Itanium/Itanium2), Intel has provided hardware-level mechanisms to specify which endian model should be used when accessing data, either big-endian or little-endian.
However all IA-64 instruction fetches are performed little-endian regardless of current endian mode.
9 posted on 07/12/2002 6:19:01 PM PDT by Lorenb420
To: Lorenb420
With IA-64 (Itanium/Itanium2), Intel has provided hardware-level mechanisms to specify which endian model should be used when accessing data, either big-endian or little-endian.
That's useful for emulation purposes. The PowerPC has similar features, but normally operates in big-endian mode.
However all IA-64 instruction fetches are performed little-endian regardless of current endian mode.
Yuck.
10 posted on 07/12/2002 6:31:10 PM PDT by HAL9000
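As a rough C sketch of what "choosing an endian model for data accesses" amounts to in software, the two helpers below read the same four bytes as either a big-endian or a little-endian 32-bit value; this is an editor's illustration rather than anything from the thread, and the names load_be32/load_le32 are invented for the example.

#include <stdint.h>
#include <stdio.h>

/* Read 4 bytes as a 32-bit value, most significant byte first. */
static uint32_t load_be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Read 4 bytes as a 32-bit value, least significant byte first. */
static uint32_t load_le32(const uint8_t *p)
{
    return ((uint32_t)p[3] << 24) | ((uint32_t)p[2] << 16) |
           ((uint32_t)p[1] << 8)  |  (uint32_t)p[0];
}

int main(void)
{
    const uint8_t bytes[4] = { 0x12, 0x34, 0x56, 0x78 };
    printf("big-endian:    %08x\n", load_be32(bytes));  /* 12345678 */
    printf("little-endian: %08x\n", load_le32(bytes));  /* 78563412 */
    return 0;
}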
To: JameRetief
I'm a big fan of AMD's chips' performance and view them as generally at least the equal of Intel's in that regard.
For mission-critical applications, however, the robustness of Intel's chips in the face of bad stuff happening is significantly ahead of AMD's.
To: JameRetief
I read somewhere that Intel had a breakthrough and is running 10-gigahertz chips in the lab with no heating problem.
If true, and they can get the technology to market, AMD could be facing a few more bruises.
--Boris
12 posted on 07/12/2002 8:36:07 PM PDT by boris
To: JameRetief
What is Intel going to do when 80 or 90 percent of their business is desktop computing? They can't seem to get Itanium accepted in mainstream desktop computing. One thing that Intel excels at... chip fabrication. Wouldn't it be funny if they eventually got degraded into being an AMD chip fab ;) Or worse... an AMD cloner. ahaHAHA!
13 posted on 07/12/2002 8:49:27 PM PDT by aSkeptic
To: HAL9000
Little-endian architecture is a relic of the 70s that was intended to maintain a degree of compatibility with 8-bit processors. Now it is just an ugly, inefficient kludge.
I am aware of precisely two advantages of big-endian architectures:
- Memory dumps may be read out more nicely than on little-endian architectures, though the concept of a 'memory dump' is in many cases practically obsolete for anyone but die-hard codeheads, for whom little-endian dumps shouldn't pose too much of a problem.
- Early-exit extended-precision magnitude comparisons may be performed more easily on big-endian than little-endian processors.
These are counterbalanced by a couple of significant advantages of little-endian architecture:
- Extended-precision addition and subtraction may be performed more readily than on big-endian architectures, since the carries must be generated from the lower-order words before the upper ones are operated on, and little-endian storage presents those lower-order words first. Note that except with very large numbers of what are expected to be widely varying magnitudes, doing a numerical comparison via an extended-precision subtraction will be faster than using the early-exit method on a big-endian processor.
- A pointer to a data item may be used to access that data item as any smaller size. While such techniques must be used with caution, they can sometimes aid in migration of code from shorter-word CPUs to larger ones. As a simple example, suppose I have a function which operates on a 16-bit integer pointed to by its argument. If such a function is passed a pointer to a 32-bit integer, it will still work correctly provided the integer's correct value starts and remains in the range -32768 to 32767. By contrast, on big-endian machines, pointing to the wrong data size will almost always yield bogus results. (A C sketch of both points follows this post.)
So out of curiosity, what's "wrong" with little-endian architectures that makes big-endian ones better?
14 posted on 07/12/2002 10:17:58 PM PDT by supercat
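To make the two points above concrete, here is a small illustrative C sketch (not from the thread; the names and the 128-bit width are arbitrary): the carries in a multi-word addition are produced from the low limb upward, which matches little-endian storage order, and a pointer to a 32-bit value can be handed to a routine that expects a 16-bit one.

#include <stdint.h>
#include <stdio.h>

/* Add two 128-bit numbers stored as four 32-bit limbs, least-significant
   limb first (the natural layout on a little-endian machine). The carry
   is produced by the low limbs before the high ones are touched. */
static void add128(uint32_t r[4], const uint32_t a[4], const uint32_t b[4])
{
    uint64_t carry = 0;
    for (int i = 0; i < 4; i++) {
        uint64_t s = (uint64_t)a[i] + b[i] + carry;
        r[i] = (uint32_t)s;
        carry = s >> 32;
    }
}

/* A routine that expects a pointer to a 16-bit integer. */
static int16_t doubled(const int16_t *p)
{
    return (int16_t)(*p * 2);
}

int main(void)
{
    uint32_t a[4] = { 0xffffffffu, 0, 0, 0 }, b[4] = { 1, 0, 0, 0 }, r[4];
    add128(r, a, b);
    printf("%08x %08x %08x %08x\n", r[3], r[2], r[1], r[0]);

    int32_t small = 1234;  /* fits in 16 bits */
    /* On a little-endian machine the low half of 'small' sits at the same
       address, so the 16-bit routine still sees the right value. (This
       bends C's strict-aliasing rules and is shown only to illustrate the
       layout argument made above.) */
    printf("%d\n", doubled((const int16_t *)&small));  /* prints 2468 */
    return 0;
}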
To: JameRetief
Itanium has wholly failed as a desktop strategy....
Itanium has NEVER been a desktop strategy. It was developed for SERVER market use, where speed of information transfer is more important.
To: Pistolshot
The Merced/Itanium was developed for both server and desktop use. The reason Intel's 64-bit processors are not yet being promoted for the desktop is the lack of software (i.e. no 64-bit Windows OS). Intel intended to use the Itanium to begin transitioning the desktop market to 64-bit, but until the OS is available they will continue to concentrate on the server side.
Now, having said that, in the server market the Itanium has not fared well. They are having trouble selling them, and even Dell, the top PC seller for Intel, has decided not to use the Itanium II in their server systems.
To: supercat
Those are some interesting points about extended-precision data formats. I hadn't thought about those, but they make sense.
In my opinion, the worst disadvantage of little-endian format is handling bitmapped data (e.g. graphics buffers and masks) - the word orders must be rearranged with each access to get spatially coherent data. This is a performance issue, and one of the main reasons x86 processors are not preferred for graphically-intensive applications.
If I recall correctly, big-endian order can also perform better on longword shifting, symbol hashing, lookup tables, etc.
17 posted on 07/12/2002 10:57:19 PM PDT by HAL9000
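One way to picture the word-rearrangement issue described above, as a hedged C sketch (the function names are invented for this example): with the usual MSB-first 1-bit-per-pixel packing, pixel order within a byte runs opposite to the bit numbering a little-endian CPU uses when the scanline is read a 32-bit word at a time, so the per-pixel index math has to swizzle the byte index to stay spatially coherent.

#include <stdint.h>

/* 1 bit per pixel, bits packed MSB-first within each byte (the usual
   video convention). Test pixel x on a scanline of bytes. */
int get_pixel_msb_first(const uint8_t *scanline, unsigned x)
{
    return (scanline[x >> 3] >> (7 - (x & 7))) & 1;
}

/* Reading the same scanline 32 bits at a time on a little-endian CPU puts
   byte 0 (the leftmost 8 pixels) in the LOW byte of the word, so the bit
   index inside the word no longer increases left to right; recovering
   spatial order means swizzling the byte index as below. */
int get_pixel_via_le32_words(const uint32_t *scanline32, unsigned x)
{
    uint32_t word = scanline32[x >> 5];        /* 32 pixels per word */
    unsigned byte_in_word = (x >> 3) & 3;      /* 0 = leftmost byte   */
    unsigned bit_in_byte  = 7 - (x & 7);       /* MSB-first in a byte */
    return (int)((word >> (byte_in_word * 8 + bit_in_byte)) & 1);
}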
To: JameRetief
Cyrix, WinChip, Harris, and others are no longer in the X86 CPU business
Cyrix lives on in the National Semiconductor Geode processor line. The WinChip still soldiers on and is doing very well as the basis of the Centaur C5, better known as the VIA C3. It was briefly known as the Cyrix III, until VIA figured out that Cyrix wasn't a name associated with quality.
And Harris still manufactures specialty IA processors such as the 386, 486, and licensed 6x86.
I know because I use all of the above on a daily basis.
To: Libertarianize the GOP; JameRetief
Thanks for the ping!
Good article!
To: HAL9000
In my opinion, the worst disadvantage of little-endian format is handling bitmapped data (e.g. graphics buffers and masks) - the word orders must be rearranged with each access to get spatially coherent data. This is a performance issue, and one of the main reasons x86 processors are not preferred for graphically-intensive applications.
The problem with graphics is simply that whoever designed early video boards decided to make them clock out the most significant bit first. Whether this was because the early graphics board designs were for big-endian computers, or whether it was because it made the graphics look like their conventionally-printed binary or hex representation, I don't know. My guess would be that early character-based designs clocked out the MSB of the character ROM first for those reasons, and because the bit order used there didn't have any effect whatsoever on the rest of the system. When people first designed bitmap displays, they could easily have made them clock out LSB first, but at the time there was no reason to do so.
BTW, if performance rather than compatibility is the goal, designing a graphics system to clock out LSB first would be trivial. Alternatively, the address counter could be changed to count in reverse sequence. It might seem a little odd to have 0,0 be the lower-right corner of the screen, but if the code is written for that I see no reason to expect any performance disadvantages to such a system.
20 posted on 07/12/2002 11:42:17 PM PDT by supercat
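A small C sketch of the LSB-first alternative mentioned above (illustrative only; the function names are invented): with LSB-first packing the shift count is simply x & 7, while the conventional MSB-first packing needs the 7 - (x & 7) reversal.

#include <stdint.h>

/* Set pixel x on a 1-bit-per-pixel scanline for the two possible bit
   orders. LSB-first needs no reversal of the bit index. */
void set_pixel_lsb_first(uint8_t *scanline, unsigned x)
{
    scanline[x >> 3] |= (uint8_t)(1u << (x & 7));
}

void set_pixel_msb_first(uint8_t *scanline, unsigned x)
{
    scanline[x >> 3] |= (uint8_t)(1u << (7 - (x & 7)));
}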