Post by Thomas Koenig
A problem with scientific calculation today is that people often
cannot use 32 bit single precision (and even more often, they do
not bother to try) because of severe roundoff errors.
There was a good reason why the old IBM scientific machines had
36 bit floating point, but that was sacrificed on the altar of
the all-round system, the 360.
Why are eight-bit bytes so common today?
Due to a domain name issue that should be cleared up soon, my web site is
unavailable for a few more hours. Otherwise, I would point you to it, since it
has a section talking about ways to make it practical for a computer to have
36-bit floating-point numbers, for just the reasons you mention.
But to answer your question:
Before April 1964, and the IBM System/360, computers generally stored text in
the form of 6-bit characters.
This was fine - as long as you were content to have text that was upper-case
only.
Now, there was a typesetting system built around the PDP-8, with 6-bit
characters in a 12-bit word; you could always have a document like this:
\MY NAME IS \JOHN.
where the backslash is an escape character making the letter it precedes appear
in upper-case instead of lower-case.
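Just as a sketch - the actual system's code is long gone, and the escape
convention here is my own reconstruction from the description above - decoding
such text is only a few lines of C:

    #include <ctype.h>
    #include <stdio.h>

    /* Render escape-coded upper-case-only text: every character prints
       as lower-case unless preceded by '\', which passes the next
       character through unchanged. */
    void decode(const char *s)
    {
        while (*s) {
            if (*s == '\\' && s[1]) {  /* escape: emit next char as-is */
                putchar(s[1]);
                s += 2;
            } else {
                putchar(tolower((unsigned char)*s));
                s++;
            }
        }
        putchar('\n');
    }

    int main(void)
    {
        decode("\\MY NAME IS \\JOHN.");  /* prints: My name is John. */
        return 0;
    }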
And the IBM 360, while it had an 8-bit byte, had keypunches that were upper-case
only, and lower-case was a relatively uncommon extra-cost option on their video
terminals; the 2741 printing terminal, based on the Selectric typewriter, did
offer lower-case routinely. But the 360 was mostly oriented around punched-card
batch processing.
For that matter, the Apple II with an 8-bit byte had an upper-case only
keyboard, and there was a word-processor for it that let you use an escape
character if you wanted mixed case.
So byte size isn't everything. But the 8-bit byte does make lower-case text a
lot easier to process. That is _one_ important reason why, when the hugely
popular (for other reasons) IBM System/360 came out, everybody started looking
upon computers with 6-bit characters as something old-fashioned.
IBM, when it designed the 360, wanted a very flexible machine, suitable to a
wide variety of applications. The intention was that microcode would allow big
and small computers to have the same instruction set, and this would serve both
businesses sending out bills with their computers, and universities doing
scientific research with them.
Because 32 bits was smaller than 36 bits - and IBM made things worse by using a
hexadecimal exponent, and by truncating instead of rounding floating-point
calculations - people doing scientific calculations simply switched to using
double precision for everything.
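As a quick illustration of the underlying problem - on a modern IEEE machine,
which is already kinder than the 360's hexadecimal, truncating format - naive
single-precision accumulation drifts badly:

    #include <stdio.h>

    int main(void)
    {
        /* Add 0.1 ten million times; the exact answer is 1000000. */
        float  fs = 0.0f;
        double ds = 0.0;
        for (int i = 0; i < 10000000; i++) {
            fs += 0.1f;
            ds += 0.1;
        }
        printf("float:  %f\n", fs);  /* prints roughly 1087937 */
        printf("double: %f\n", ds);  /* prints very nearly 1000000 */
        return 0;
    }

On the 360 it was worse still: the hexadecimal exponent could waste up to three
leading bits of the 24-bit fraction, and truncation biased every operation
downward.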
The IBM 360 was very popular. SDS, later purchased by Xerox, made a machine
called the Sigma which, although not compatible with the 360, used similar data
formats, and which was designed to perform the same types of calculations. (In
some ways, it was a simpler design aimed at allowing 360 workloads to be handled
with 7090-era technology.) And there was the Spectra series of computers from
RCA, which were partly 360 compatible.
But the effects of the 360 were felt everywhere. The PDP-4 had an 18-bit word,
but minicomputers from other companies, like the HP 211x series or the
Honeywell 316, went to a 16-bit word. And DEC, which had minis with 12-bit and
18-bit words with the same basic design as the HP 211x and Honeywell 316 -
single-word memory-reference instructions were achieved by allowing them to
refer to locations on "page zero" and the *current* page, with indirect
addressing used to get at anywhere else - decided it needed something modern
too.
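For concreteness, here is a minimal sketch of that paged addressing scheme,
using the 12-bit PDP-8's field layout (the 16-bit machines differed in detail
but not in spirit):

    #include <stdint.h>

    static uint16_t mem[4096];  /* 4K words; only 12 bits of each used */

    /* A PDP-8 memory-reference instruction has a 7-bit offset, a
       current-page bit, and an indirect bit.  Page bit clear means
       page zero; set means the page the instruction itself is on.
       The indirect bit fetches a full 12-bit address from memory,
       which is how code reached the rest of the 4K words. */
    uint16_t effective_address(uint16_t instr, uint16_t pc)
    {
        uint16_t ea = instr & 0177;   /* 7-bit offset within a page */
        if (instr & 0200)             /* current-page bit */
            ea |= pc & 07600;         /* page = upper 5 address bits */
        if (instr & 0400)             /* indirect bit */
            ea = mem[ea] & 07777;
        return ea;
    }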
And so DEC came up with the *wildly successful* PDP-11. It was a minicomputer
with a 16-bit word. But unlike the minicomputers I've just mentioned, it had a
modern architecture. The only indirect addressing was register indirect
addressing. Memory wasn't divided into pages.
The PDP-11 transformed the world of computing. It solidified the dominance of
the 8-bit byte. It also made the "little-endian" byte order a common choice.
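"Little-endian" means the low-order byte of a word sits at the lower address,
as a quick check on any little-endian machine today will show:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t word = 0x01020304;
        unsigned char *p = (unsigned char *)&word;

        /* The PDP-11 stored the low byte of a 16-bit word first;
           x86 does the same for all its word sizes. */
        printf("%02x %02x %02x %02x\n", p[0], p[1], p[2], p[3]);
        /* little-endian prints: 04 03 02 01 */
        return 0;
    }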