Quadibloc
2023-07-09 02:40:12 UTC
The HP 9845 has been called the world's first workstation computer.
However, when I saw descriptions of its internals, I thought that must be an
exaggeration. Like many programmable calculators from HP, it used a CPU
derived from the HP 211x minicomputer, and it did BCD arithmetic.
And those programmable calculators did floating-point arithmetic one digit at
a time, basically at pocket calculator speed. There's no way a CPU that anemic
could drive anything you could call a "workstation", not even by the standards
of 1978.
So I dug deeper.
On the other systems I was thinking of, the HP 211x-like processor did the
floating-point arithmetic itself, with instructions to handle single BCD digits.
The 9845, and related systems like the 9825, instead had a special chip, the
Extended Math Chip, that did the floating-point. Hey, with a name like EMC, it's
gotta be smart?
And internally, floats had exponents running from -511 to +511. Okay, so
the exponent is a binary integer, not two BCD digits. That will speed things
up a little.
But then I found the smoking gun.
The EMC, among its circuitry, contained a *16 x 16 binary multiplier*.
I guess that would help with multiplying. But wouldn't converting from
decimal to binary and back again every time you multiplied still be kind of
slow?
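
Just to show what I mean about the multiplier being useful - here's a quick
sketch, in C, of how a 16 x 16 multiplier is all you need as the building
block for wider products. The 32-bit width and the function name are purely
my own illustration, not anything documented about the EMC's internals:

#include <stdint.h>

/* 32 x 32 -> 64 bit product assembled from four 16 x 16 -> 32 bit
   partial products, the way a multi-word binary significand would
   have to be multiplied on hardware with only a 16 x 16 multiplier. */
uint64_t mul32x32(uint32_t a, uint32_t b)
{
    uint32_t al = a & 0xFFFF, ah = a >> 16;
    uint32_t bl = b & 0xFFFF, bh = b >> 16;

    uint64_t ll = (uint64_t)al * bl;    /* one 16 x 16 product */
    uint64_t lh = (uint64_t)al * bh;    /* ...and three more   */
    uint64_t hl = (uint64_t)ah * bl;
    uint64_t hh = (uint64_t)ah * bh;

    return ll + ((lh + hl) << 16) + (hh << 32);
}

Four small multiplies and a few shifted adds, instead of dozens of
digit-at-a-time BCD steps - that's the kind of speedup a hardware
multiplier buys you.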
Maybe there's some fancy algorithm to do BCD multiplication on a binary
multiplier that I don't know about. But I _do_ know about another way to
sidestep the issue entirely.
While decimal digits, binary digits, or any other kind of digits let you
represent *different* sets of numbers exactly as fractions... integers are
integers: you can represent exactly the same set of integers, just up to a
different limit, no matter what kind of digit you use.
So, 'way back when, John von Neumann represented floating-point
numbers as pairs of binary integers - one was the mantissa or significand,
as an integer rather than a fraction, and the other was the *power of
ten* by which it was multiplied.
That way, people wouldn't complain that 0.1 plus 0.2 worked out to be
0.30000000000000004; the arithmetic would do what users expected.
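
To make that concrete, here is a toy version of the idea in C. The struct,
the names, and the lack of rounding and overflow handling are all my own
illustration - I'm not claiming this is the 9845's actual format - but it
shows why 0.1 plus 0.2 comes out exactly right when the significand is a
binary integer scaled by a power of ten:

#include <stdint.h>
#include <stdio.h>

/* value = sig * 10^exp, with sig an ordinary binary integer */
struct dec {
    int64_t sig;
    int     exp;
};

/* Align the exponents (scaling up never loses digits, barring
   overflow), then add the integer significands exactly. */
static struct dec dec_add(struct dec a, struct dec b)
{
    while (a.exp > b.exp) { a.sig *= 10; a.exp--; }
    while (b.exp > a.exp) { b.sig *= 10; b.exp--; }
    struct dec r = { a.sig + b.sig, a.exp };
    return r;
}

int main(void)
{
    struct dec one_tenth  = { 1, -1 };   /* 0.1 = 1 * 10^-1 */
    struct dec two_tenths = { 2, -1 };   /* 0.2 = 2 * 10^-1 */
    struct dec sum = dec_add(one_tenth, two_tenths);
    /* prints "3 * 10^-1", i.e. exactly 0.3 */
    printf("%lld * 10^%d\n", (long long)sum.sig, sum.exp);
    return 0;
}

And multiplication is just as pleasant: multiply the two binary
significands - on that 16 x 16 multiplier - and add the exponents.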
Intel uses this very same technique today in its implementation of the
new Decimal Floating-Point standard (the BID encoding), because it didn't
feel it was worth devoting as many transistors to BCD arithmetic on its
mass-market chips as IBM was doing on its z/Architecture mainframes.
So, at this point, I suspect that the 9845 and related systems also used
this technique, so that they could make the most effective use of that
16 x 16 binary multiplier they put on the EMC, while still providing
high-performance arithmetic that mimicked the unsurprising behavior of
BCD arithmetic in calculators.
John Savard