[Intel] "New" Celeron dictated by marchitecture, not architecture
The Inquirer ^ | July 11, 2002 | Andrew Busigin

Posted on 07/12/2002 3:15:33 PM PDT by JameRetief


1 posted on 07/12/2002 3:15:33 PM PDT by JameRetief

To: SlickWillard
Ping
2 posted on 07/12/2002 3:33:46 PM PDT by KayEyeDoubleDee

To: JameRetief
Celerons were intentionally crippled...

That's not entirely true, at least for the second set of Celerons - the ones running faster than 300 MHz. A great many of them were P-III's that failed their L2 cache test.

3 posted on 07/12/2002 3:48:31 PM PDT by general_re

To: JameRetief
Never trust a little-endian CPU.
4 posted on 07/12/2002 4:27:35 PM PDT by HAL9000

To: HAL9000
What difference does that make?
5 posted on 07/12/2002 5:14:54 PM PDT by Paleo Conservative

To: JameRetief
Celeron: The Ford Escort of CPUs.
6 posted on 07/12/2002 5:16:56 PM PDT by Timesink

To: Paleo Conservative
Little-endian architecture is a relic of the 70s that was intended to maintain a degree of compatibility with 8-bit processors. Now it is just an ugly, inefficient kludge.

Intel knows that little-endian architecture is bad, and would like to abandon it - but Microsoft insists that it remains the standard.
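For readers unfamiliar with the terms, a minimal C sketch (added for illustration, not part of the original post) of what the two byte orders mean in practice; it simply inspects how the host CPU lays out a 32-bit value in memory:

/* Inspect the in-memory layout of a 32-bit value to see which
   byte order the host CPU uses. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t value = 0x0A0B0C0D;
    const unsigned char *bytes = (const unsigned char *)&value;

    /* Little-endian stores the least significant byte (0x0D) at the
       lowest address; big-endian stores the most significant byte
       (0x0A) there. */
    printf("first byte in memory: 0x%02X -> %s-endian\n",
           bytes[0], bytes[0] == 0x0D ? "little" : "big");
    return 0;
}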

7 posted on 07/12/2002 5:40:27 PM PDT by HAL9000

To: *tech_index; Ernest_at_the_Beach
.
8 posted on 07/12/2002 5:50:49 PM PDT by Libertarianize the GOP

To: HAL9000
Intel knows that little-endian architecture is bad, and would like to abandon it - but Microsoft insists that it remains the standard.

With IA-64 (Itanium/Itanium2), Intel has provided hardware level mechanisms to specify which endian model should be used when accessing data, either big-endian or little-endian.

However all IA-64 instruction fetches are performed little-endian regardless of current endian mode.
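The IA-64 per-access endian selection is a hardware feature; as a rough software analogy (an illustrative C sketch, not IA-64-specific code), the usual portable way to read big-endian data on a little-endian host is to assemble the value byte by byte, so the result does not depend on the CPU's native order:

#include <stdint.h>

/* Assemble a 32-bit value from four bytes stored in big-endian
   order, independent of the host CPU's native endianness. */
static uint32_t load_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}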

9 posted on 07/12/2002 6:19:01 PM PDT by Lorenb420

To: Lorenb420
With IA-64 (Itanium/Itanium2), Intel has provided hardware level mechanisms to specify which endian model should be used when accessing data, either big-endian or little-endian.

That's useful for emulation purposes. The PowerPC has similar features, but normally operates in big-endian mode.

However all IA-64 instruction fetches are performed little-endian regardless of current endian mode.

Yuck.

10 posted on 07/12/2002 6:31:10 PM PDT by HAL9000

To: JameRetief
I'm a big fan of AMD's chips' performance and view them as generally at least the equal of Intel's in that regard.

For mission-critical applications, however, the robustness of Intel's chips in the face of bad stuff happening is significantly ahead of AMD's.
11 posted on 07/12/2002 6:51:50 PM PDT by RightOnTheLeftCoast

To: JameRetief
I read somewhere that Intel had a breakthrough and is running 10-gigahertz chips in the lab with no heating problem.

If true, and they can get the technology to market, AMD could be facing a few more bruises.

--Boris

12 posted on 07/12/2002 8:36:07 PM PDT by boris

To: JameRetief
What is Intel going to do when 80 or 90 percent of their business is desktop computing? They can't seem to get Itanium accepted in mainstream desktop computing. One thing that Intel excels at... chip fabrication. Wouldn't it be funny if they eventually got degraded into being an AMD chip fab ;) Or worse... an AMD cloner. ahaHAHA!
13 posted on 07/12/2002 8:49:27 PM PDT by aSkeptic

To: HAL9000
Little-endian architecture is a relic of the 70s that was intended to maintain a degree of compatibility with 8-bit processors. Now it is just an ugly, inefficient kludge.

I am aware of precisely two advantages of big-endian architectures:

These benefits are counterbalanced by a couple of significant benefits of little-endian architecture:

So out of curiosity, what's "wrong" with little-endian architectures that makes big-endian ones better?
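The post's list items did not survive, but one little-endian benefit commonly cited in this kind of comparison (shown here as an illustrative C sketch, not the poster's own example) is that the same address yields the low-order part of a value when it is read at a narrower width:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    uint32_t wide = 0x12345678;
    uint16_t half;
    uint8_t  low;

    /* On a little-endian CPU the lowest-addressed bytes are the least
       significant ones, so these copies pick up 0x5678 and 0x78; on a
       big-endian CPU they would pick up 0x1234 and 0x12 instead. */
    memcpy(&half, &wide, sizeof half);
    memcpy(&low,  &wide, sizeof low);

    printf("32-bit 0x%08X -> 16-bit 0x%04X, 8-bit 0x%02X\n",
           (unsigned)wide, (unsigned)half, (unsigned)low);
    return 0;
}
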
14 posted on 07/12/2002 10:17:58 PM PDT by supercat

To: JameRetief
Itanium has wholly failed as a desktop strategy....

Itanium has NEVER been a desktop strategy. It was developed for the SERVER market, where speed of information transfer is more important.

15 posted on 07/12/2002 10:26:22 PM PDT by Pistolshot

To: Pistolshot
The Merced/Itanium was developed for both server and desktop use. The reason Intel's 64-bit processors are not yet being promoted for the desktop is the lack of software (i.e. no 64-bit Windows OS). Intel intended to use the Itanium to begin transitioning the desktop market to 64-bit, but until the OS is available they will continue to concentrate on the server side.

Now, having said that, in the server market the Itanium has not fared well. They are having trouble selling them, and even Dell, the top PC seller for Intel, has decided not to use the Itanium 2 in their server systems.
16 posted on 07/12/2002 10:53:12 PM PDT by JameRetief

To: supercat
Those are some interesting points about extended-precision data formats. I hadn't thought about those, but they make sense.

In my opinion, the worst disadvantage of little-endian format is handling bitmapped data (e.g. graphics buffers and masks) - the word orders must be rearranged with each access to get spatially coherent data. This is a performance issue, and one of the main reasons x86 processors are not preferred for graphically-intensive applications.

If I recall correctly, big-endian order can also perform better on longword shifting, symbol hashing, lookup tables, etc.
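A hedged sketch of the kind of rearrangement being described (illustrative C, assuming a 1-bit-per-pixel scanline whose leftmost pixel sits in the most significant bit of each 32-bit word, and relying on the GCC/Clang __BYTE_ORDER__ predefined macro): a little-endian host has to byte-swap each word it reads to keep the pixels in left-to-right order.

#include <stdint.h>
#include <string.h>

/* Reverse the byte order of a 32-bit word. */
static uint32_t bswap32(uint32_t x)
{
    return (x >> 24) | ((x >> 8) & 0x0000FF00u) |
           ((x << 8) & 0x00FF0000u) | (x << 24);
}

/* Fetch 32 horizontally consecutive pixels from a scanline; bit 31
   of the result is the leftmost pixel on screen. */
static uint32_t read_pixels(const unsigned char *scanline, int word_index)
{
    uint32_t w;
    memcpy(&w, scanline + 4 * word_index, 4);   /* raw framebuffer bytes */
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
    w = bswap32(w);   /* little-endian host: undo the byte reversal */
#endif
    return w;
}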

17 posted on 07/12/2002 10:57:19 PM PDT by HAL9000

To: JameRetief
Cyrix, WinChip, Harris, and others are no longer in the X86 CPU business

Cyrix lives on in the National Semiconductor Geode processor line. The WinChip still soldiers on and is doing very well as the basis of the Centaur C5, or as it is better known, the VIA C3. However, it was briefly known as the Cyrix III until VIA figured out that Cyrix wasn't a name associated with quality.

And Harris still manufactures specialty IA processors such as the 386, 486, and licensed 6x86.

I know because I use all of the above on a daily basis.
18 posted on 07/12/2002 11:06:25 PM PDT by ContentiousObjector

To: Libertarianize the GOP; JameRetief
Thanks for the ping!

Good article!

19 posted on 07/12/2002 11:15:51 PM PDT by Ernest_at_the_Beach

To: HAL9000
In my opinion, the worst disadvantage of little-endian format is handling bitmapped data (e.g. graphics buffers and masks) - the word orders must be rearranged with each access to get spatially coherent data. This is a performance issue, and one of the main reasons x86 processors are not preferred for graphically-intensive applications.

The problem with graphics is simply that whoever designed early video boards decided to make them clock out the most significant bit first. Whether this was because the early graphics board designs were for big-endian computers, or whether it was because it made the graphics look like their conventionally-printed binary or hex representation, I don't know. My guess would be that early character-based designs clocked out the MSB of the character ROM first for the above reasons, and because the bit order used there didn't have any effect whatsoever on the rest of the system. When people first designed bitmap displays, they could easily have made them clock out LSB first, but at the time there was no reason to do so.

BTW, if performance rather than compatibility is the goal, designing a graphics system to clock out LSB first would be trivial. Alternatively, the address counter could be changed to count in reverse sequence. It might seem a little odd to have 0,0 be the lower-right corner of the screen, but if the code is written for that I see no reason to expect any performance disadvantages to such a system.
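A small C sketch of the two bit orders under discussion (illustrative only): packing eight 1-bit pixels into a byte with the leftmost pixel in either the most significant bit, the order early video shift registers clocked out, or the least significant bit.

#include <stdint.h>

/* Leftmost pixel goes into bit 7, the order early video hardware
   shifted data out. */
static uint8_t pack_msb_first(const uint8_t pixels[8])
{
    uint8_t b = 0;
    for (int i = 0; i < 8; i++)
        b |= (uint8_t)((pixels[i] & 1u) << (7 - i));
    return b;
}

/* Leftmost pixel goes into bit 0, the "LSB first" alternative the
   post describes as equally easy to build. */
static uint8_t pack_lsb_first(const uint8_t pixels[8])
{
    uint8_t b = 0;
    for (int i = 0; i < 8; i++)
        b |= (uint8_t)((pixels[i] & 1u) << i);
    return b;
}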

20 posted on 07/12/2002 11:42:17 PM PDT by supercat

