To: Paleo Conservative
Little-endian architecture is a relic of the 1970s, intended to maintain a degree of compatibility with 8-bit processors. Now it is just an ugly, inefficient kludge.
Intel knows that little-endian architecture is bad and would like to abandon it, but Microsoft insists that it remain the standard.
7 posted on 07/12/2002 5:40:27 PM PDT by HAL9000
To: HAL9000
Intel knows that little-endian architecture is bad and would like to abandon it, but Microsoft insists that it remain the standard. With IA-64 (Itanium/Itanium2), Intel has provided hardware-level mechanisms to specify which endian model should be used when accessing data, either big-endian or little-endian.
However, all IA-64 instruction fetches are performed little-endian, regardless of the current endian mode.
9 posted on 07/12/2002 6:19:01 PM PDT by Lorenb420
To: HAL9000
Little-endian architecture is a relic of the 1970s, intended to maintain a degree of compatibility with 8-bit processors. Now it is just an ugly, inefficient kludge. I am aware of precisely two advantages of big-endian architectures:
- Memory dumps may be read more easily than on little-endian architectures, though the concept of a 'memory dump' is in many cases practically obsolete for anyone but die-hard codeheads, for whom little-endian dumps shouldn't pose much of a problem.
- Early-exit extended-precision magnitude comparisons may be performed more easily on big-endian than on little-endian processors.
These benefits are counterbalanced by a couple of significant advantages of little-endian architecture:
- Extended-precision addition and subtraction may be performed more readily than on big-endian architectures, since the carries must be generated from the lower stages before the upper ones are operated on; with little-endian storage, the low-order words come first in memory, so a single ascending pass suffices. Note that except with very large numbers expected to be of widely varying magnitudes, doing a numerical comparison via an extended-precision subtraction will be faster than using the early-exit method on a big-endian processor.
- A pointer to a data item may be used to access that data item as any smaller size. While such techniques must be used with caution, they can sometimes aid in migrating code from shorter-word CPUs to longer-word ones. As a simple example, suppose I have a function which operates on a 16-bit integer pointed to by its argument. If such a function is passed a pointer to a 32-bit integer, it will still work correctly provided the integer's value starts and remains in the range -32768 to 32767. By contrast, on big-endian machines, pointing to the wrong data size will almost always yield bogus results.
So out of curiosity, what's "wrong" with little-endian architectures that makes big-endian ones better?
14 posted on 07/12/2002 10:17:58 PM PDT by supercat
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson