Post by gareth
I beg to differ on one point, and that is having been brought up on the
PDP11 and its little-endian scheme, to me that is the logical way of
progressing, because with multi-byte (or multi-word) integer arithmetic
the offset of each succeeding byte increases in step with its
significance: the byte at offset n carries the weight 256^n.
Any difficulties perceived in reading off hexadecimal are trivially
covered by a simple program.
Consistent little-endian operation certainly is logical; but it's
confusing to people who were brought up reading and writing in English
or other languages written from left to right.
When one reads Hebrew or Arabic from right to left, one encounters the
digits of a number in the little-endian fashion, the least significant
digit first.
The problem isn't that it's hard to read a core dump. The problem is
that conceptually it's hard to remember where certain things should
be. In the spec for a block cipher, when it's enciphering a sequence
of ASCII characters transmitted on a communications channel, where
does the first character go?
In a big-endian world, the first character automatically goes in the
first part of the binary 64-bit number that is operated on... there's
no need to read over the spec carefully and hunt for the fine print.
Little-endian bit numbering makes sense for circuit boards that attach
a 12-bit A/D converter to a 16-bit bus, for example, and little-endian
storage of multi-word numbers makes multi-precision arithmetic
simpler. Both systems have their advantages.
What I personally think is the decisive advantage in favor of big-
endian, and the reason I wish the PDP-11 had never introduced the
little-endian alternative, is that it's pedagogically simpler: it lets
people new to computers be taught to understand how computers work at
the machine-language hardware level more quickly, by removing an
opportunity for confusion.
And big-endian is generally preferable if a computer has a decimal data
type, for doing a few quick arithmetic computations on numbers stored
in text form, so that the contents of a data file or database stay
human-readable.
That's why the IBM 360, designed to replace both the scientific 7090
and the commercial 1401, was big-endian. That, and the fact that the
alternative likely wasn't even considered as a possibility: before the
PDP-11, many computers did store the two words of things like
double-precision integers in little-endian order, but characters within
a word were, AFAIK, always big-endian.
Little-endian isn't bad - it is logical and consistent. But it makes
life more complicated: as the newcomer, it is the scheme that created
the headache of incompatibility.