Quadibloc
2020-08-06 21:24:23 UTC
I was looking for information on K&E's abandoned prototype slide rule, the KE-Lon,
and found an article about how the demise of the slide rule was more gradual than
often acknowledged.
That may be, but it certainly was still more rapid than most other replacements of
older products by new technologies.
One thing that came to my mind was that, while it is true that machines like the
Wang 500 or the HP 9100A preceded the arrival of the pocket calculator, and slide
rule makers could perhaps have been expected to see the pocket calculator coming
_eventually_, the rapid pace of improvement in digital electronics was not so
obvious at the time.
And the pocket calculator came to market *before* the 8-bit personal computer.
Computer memory chips are, of course, made using the same basic digital microchip
technology as computer processor chips. There are differences in the fabrication
processes used, so that memory designs can emphasize density over speed, but these
are variations on the same basic technology.
So it's difficult to see how an alternate history could have happened in which CPU
technology developed more slowly, but memory technology, at least in terms of
density if not speed, developed more quickly.
A pocket calculator chip performs calculations on decimal floating-point numbers,
often including trig and log functions. So it performs pretty complex operations.
And there were also programmable calculators.
The operations performed by the instructions in even a mainframe computer like
the IBM System/360 weren't any more complex.
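Just to make that comparison concrete, here is a small sketch in C (purely
illustrative - not any actual calculator firmware, and the ten-digit mantissa
length is just an assumption) of the digit-at-a-time BCD addition that the
arithmetic inside such a chip ultimately boils down to:

#include <stdio.h>

#define DIGITS 10  /* an assumed calculator mantissa length */

/* Add two unsigned BCD mantissas, least significant digit first.
   Each element holds one decimal digit, 0-9.  Returns the final carry. */
int bcd_add(const unsigned char a[DIGITS], const unsigned char b[DIGITS],
            unsigned char sum[DIGITS])
{
    int carry = 0;
    for (int i = 0; i < DIGITS; i++) {
        int d = a[i] + b[i] + carry;   /* at most 9 + 9 + 1 = 19 */
        carry = d >= 10;
        sum[i] = carry ? d - 10 : d;
    }
    return carry;                       /* a 1 here would mean overflow */
}

int main(void)
{
    /* 0000000123 + 0000000989, least significant digit first */
    unsigned char a[DIGITS] = {3, 2, 1, 0, 0, 0, 0, 0, 0, 0};
    unsigned char b[DIGITS] = {9, 8, 9, 0, 0, 0, 0, 0, 0, 0};
    unsigned char s[DIGITS];

    bcd_add(a, b, s);
    for (int i = DIGITS - 1; i >= 0; i--)
        printf("%d", s[i]);
    printf("\n");                       /* prints 0000001112 */
    return 0;
}

The trig and log functions are, of course, built up from many such digit-level
steps, which is why the work behind a single calculator keystroke stands
comparison with a mainframe instruction.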
So, even without a major change in CPU technology versus memory technology...
instead of 8-bit processors like the 8080 and 6800, why didn't some enterprising
chipmaker take a microchip with an 8-bit ALU, and, through microprogramming,
produce a chip that was similar to a System/360 Model 30 - a chip with
instructions to operate on 32-bit integers and 32-bit and 64-bit floating-point
numbers?
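To make the idea concrete, here is a rough sketch in C (purely illustrative, not
actual Model 30 microcode; the four-register machine is invented for the example)
of how a microprogram driving an 8-bit ALU might implement a 32-bit
register-to-register add, one byte at a time, chaining the carry between passes:

#include <stdint.h>
#include <stdio.h>

/* A hypothetical machine with four 32-bit general registers, each held
   as four bytes (least significant byte first), because the real ALU is
   only 8 bits wide. */
static uint8_t reg[4][4];

/* "Microprogram" for a 32-bit add: R[dst] = R[dst] + R[src].
   Each pass through the loop is one trip through the 8-bit ALU. */
static void add32(int dst, int src)
{
    unsigned carry = 0;
    for (int i = 0; i < 4; i++) {
        unsigned s = reg[dst][i] + reg[src][i] + carry; /* 8-bit add */
        reg[dst][i] = (uint8_t)s;                       /* low 8 bits back */
        carry = s >> 8;                                 /* carry to next byte */
    }
}

static void load32(int r, uint32_t v)
{
    for (int i = 0; i < 4; i++)
        reg[r][i] = (uint8_t)(v >> (8 * i));
}

static uint32_t read32(int r)
{
    uint32_t v = 0;
    for (int i = 0; i < 4; i++)
        v |= (uint32_t)reg[r][i] << (8 * i);
    return v;
}

int main(void)
{
    load32(0, 0x00FFFFFFu);
    load32(1, 0x00000001u);
    add32(0, 1);
    printf("%08X\n", read32(0));   /* prints 01000000 */
    return 0;
}

Multiplication, division and the floating-point operations would work the same
way in principle, just as longer sequences of these byte-wide steps.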
I suppose that the reason was one of efficiency and flexibility. A chip designed
that way wouldn't have been able to run with maximum efficiency on problems only
involving 8-bit integers; the microprogram layer would always be in the way. The
other way around, BASIC interpreters could certainly include floating-point
subroutines... and, if desired, one could even have the UCSD P-System, where an
8-bit micro is turned into a mainframe-like computer through the use of an
interpretive routine for a more powerful instruction set.
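For what it's worth, an interpretive routine of that sort is at bottom just a
fetch-decode-execute loop in software. A minimal sketch in C (my own toy
stack-machine bytecode, not the real UCSD p-code instruction set) would look
something like this:

#include <stdint.h>
#include <stdio.h>

/* Toy opcodes for a stack-oriented virtual machine, loosely in the
   spirit of a p-code interpreter (these are invented, not UCSD's). */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

static void run(const int32_t *code)
{
    int32_t stack[64];
    int sp = 0;          /* stack pointer */
    int pc = 0;          /* program counter into the bytecode */

    for (;;) {
        switch (code[pc++]) {           /* fetch and decode */
        case OP_PUSH:  stack[sp++] = code[pc++];             break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];      break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];      break;
        case OP_PRINT: printf("%d\n", stack[--sp]);           break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* Compute (2 + 3) * 7 and print it. */
    const int32_t program[] = {
        OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
        OP_PUSH, 7, OP_MUL,
        OP_PRINT, OP_HALT
    };
    run(program);        /* prints 35 */
    return 0;
}

The host processor only ever executes the interpreter's own instructions, but
the program is written for the "larger" virtual machine - the same trick, done
in software, that microprogramming plays in hardware.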
John Savard