Discussion:
If Memory Had Been Cheaper
Quadibloc
2020-08-06 21:24:23 UTC
I was looking for information on K&E's abandoned prototype slide rule, the KE-Lon,
and found an article about how the demise of the slide rule was more gradual than
often acknowledged.

That may be, but it certainly was still more rapid than most other replacements of
older products by new technologies.

One thing that came to my mind was that, while it was true that machines like the
Wang 500 or the HP 9100A preceded the arrival of the pocket calculator, still,
while slide rule makers could perhaps have been expected to see the pocket
calculator coming _eventually_, the rapid pace of improvement in digital
electronics was not so obvious at the time.

And the pocket calculator came to market *before* the 8-bit personal computer.

Computer memory chips are, of course, made using the same basic digital microchip
technology as computer processor chips. There are differences in the fabrication
processes used, so that memory designs can emphasize density over speed, but these
are variations on the same basic technology.

So it's difficult to see how an alternate history could have happened in which CPU
technology developed more slowly, but memory technology, at least in terms of
density if not speed, developed more quickly.

A pocket calculator chip performs calculations on decimal floating-point numbers,
often including trig and log functions. So it performs pretty complex operations.
And there were also programmable calculators.

The operations performed by the instructions in even a mainframe computer like
the IBM System/360 weren't any more complex.

So, even without a major change in CPU technology versus memory technology...
instead of 8-bit processors like the 8080 and 6800, why didn't some enterprising
chipmaker take a microchip with an 8-bit ALU, and, through microprogramming,
produce a chip that was similar to a System/360 Model 30 - a chip with
instructions to operate on 32-bit integers and 32-bit and 64-bit floating-point
numbers?

I suppose that the reason was one of efficiency and flexibility. A chip designed
that way wouldn't have been able to run with maximum efficiency on problems only
involving 8-bit integers; the microprogram layer would always be in the way. The
other way around, BASIC interpreters could certainly include floating-point
subroutines... and, if desired, one could even have the UCSD P-System, where an
8-bit micro is turned into a mainframe-like computer through the use of an
interpretive routine for a more powerful instruction set.
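To make the microprogram-layer idea concrete, here is a toy sketch of a 32-bit add built entirely from an 8-bit ALU primitive - the kind of byte-serial loop a 360/30-style engine runs under the covers (helper names are invented for illustration):

```python
def alu8_add(a: int, b: int, carry_in: int):
    """8-bit adder primitive: returns (8-bit sum, carry out)."""
    s = a + b + carry_in
    return s & 0xFF, s >> 8

def add32(x: int, y: int) -> int:
    """32-bit add performed one byte at a time, low byte first,
    propagating the carry between passes - four trips through
    the 8-bit ALU per 32-bit operation."""
    result, carry = 0, 0
    for byte in range(4):
        a = (x >> (8 * byte)) & 0xFF
        b = (y >> (8 * byte)) & 0xFF
        s, carry = alu8_add(a, b, carry)
        result |= s << (8 * byte)
    return result & 0xFFFFFFFF  # wraps like a 32-bit register

print(hex(add32(0x12345678, 0x0FEDCBA8)))  # 0x22222220
```

Multiply, divide, and floating point decompose the same way, just with many more passes per operation - which is exactly why such a chip could never match a native 8-bit processor on byte-sized work.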

John Savard
Terry Kennedy
2020-08-06 23:59:40 UTC
Post by Quadibloc
Computer memory chips are, of course, made using the same basic digital microchip
technology as computer processor chips. There are differences in the fabrication
processes used, so that memory designs can emphasize density over speed, but these
are variations on the same basic technology.
So it's difficult to see how an alternate history could have happened in which CPU
technology developed more slowly, but memory technology, at least in terms of
density if not speed, developed more quickly.
Core memory persisted well into the minicomputer era - the Data General Nova
and Eclipse lines both started out as core-only machines (the first memory I ever
bought was an 8K card for an Eclipse - it was 15" x 15" and cost $22,000 in
the currency of the day). Later Nova and Eclipse models used semiconductor
memory, which required a number of changes to the CPU boards (DG wanted to
sell a whole new processor board set, but I reverse engineered the changes
and "dead bug'd" ICs as needed). This was one of the first steps in
incrementally upgrading that system from an S/200 to an S/230 without buying any CPU
parts from DG. It was amusing to see the look on the service tech's face when
he'd say that an S/200 diagnostic failed and some part needed to be replaced,
and I'd say "you have to use the S/230 diagnostic".

Similarly, early PDP-11 systems used core memory (and the "boot ROM", if you
were lucky / rich enough to have one, was a quad board full of diodes that
you clipped out to create the desired pattern of 1's and 0's).

Both the DG and DEC systems I mention here were based on 74181 4-bit ALUs,
BTW.
Post by Quadibloc
So, even without a major change in CPU technology versus memory technology...
instead of 8-bit processors like the 8080 and 6800, why didn't some enterprising
chipmaker take a microchip with an 8-bit ALU, and, through microprogramming,
produce a chip that was similar to a System/360 Model 30 - a chip with
instructions to operate on 32-bit integers and 32-bit and 64-bit floating-point
numbers?
Have you ever read through the 360 Principles of Operation manual? It is a
very dense read of everything necessary / not permitted / optional to have a
system that was a "360 family" machine. And since the 370 family was out by
the time you're talking about, you need to throw virtual memory into the mix
as well.

Add in the fact that IBM started copyrighting their software and you couldn't
just get a copy of DOS/VS (for example) and legally run it, and it is no wonder
that no mini or micro manufacturer wanted to get involved. Plus they'd need to
have 100% compatible peripherals or write their own drivers for each OS and keep
them up-to-date.

The "baby" 370, the 3115 Processor, actually used a bunch (5, IIRC) of 801-ish
microprocessors to implement both the 370 instruction set and the I/O
controllers. IBM also offered plug-in 370-ish processor boards for the IBM PC and AT -
the XT/370 and the AT/370. These were done with a custom Motorola 68K for the
CPU core and an Intel 8087 for floating point. These boards were "problem state"
(IBM's term for user application programs) compatible with the 370 instruction
set but not for privileged execution (supervisor state / operating system).
They needed a rather heavily modified version of VM/SP to run.

And even IBM got compatibility wrong on occasion. The 9370 processor line had
some undocumented discrepancies with other 370-family processors. That led to
IBM eventually taking the system back from where I was working because we
refused to sign the acceptance letter until it could run the IBM software
supplied with it properly, along with some other dumb bugs - if you flipped the
"Test" switch present on the front of every 3278 terminal on and off rapidly,
you would eventually crash the 9370. IBM's suggested "fix": Post signs in the
student computer labs saying "Please don't touch the test switch or you will
crash the computer". Right...
Post by Quadibloc
The other way around, BASIC interpreters could certainly include floating-
point subroutines... and, if desired, one could even have the UCSD P-System,
where an 8-bit micro is turned into a mainframe-like computer through the
use of an interpretive routine for a more powerful instruction set.
Been there, did that. The Pascal Microengine used one of 4 variants of the
WD MCP-1600 chipset, which was an 8-bit processor that used microcode to
emulate a 16-bit CPU. The implementations (different microcode ROMs) were:

1) DEC LSI-11
2) Pascal Microengine
3) Alpha Micro
4) Unnamed processor used in internal Western Electric products

Oh, and I'm not sure where cost/type of memory and ALUs fit together in your
original post - seems like 2 different threads.

I have worked on systems with Williams[-Kilburn] tube memory and drum memory
as well as core, static and dynamic RAM. I'm not old enough to have used the
first 2 when they were current, but I have done restorations on systems that
used them.
Quadibloc
2020-08-07 00:22:23 UTC
Post by Terry Kennedy
Have you ever read through the 360 Principles of Operation manual? It is a
very dense read of everything necessary / not permitted / optional to have a
system that was a "360 family" machine.
I should have made it clearer that I was thinking of a microchip that was
microprogrammed to execute the kinds of instructions found on mainframes without
necessarily being a mainframe. Think of the RCA Spectra 70, which was compatible
with the 360 with respect to user programs, but which had much simpler I/O.

Or look at the SDS Sigma computers for an example of how you could compete with
the 360 while using much more primitive technology.

John Savard
Anne & Lynn Wheeler
2020-08-09 02:10:13 UTC
Post by Terry Kennedy
The "baby" 370, the 3115 Processor, actually used a bunch (5, IIRC)
801-ish microprocessors to implement both the 370 instruction set and
the I/O control- lers. IBM also offered plug-in 370-ish processor
boards for the IBM PC and AT - the XT/370 and the AT/370. These were
done with a custom Motorola 68K for the CPU core and an Intel 8087 for
floating point. These boards were "problem state" (IBM's term for user
application programs) compatible with the 370 instruction set but not
for privileged execution (supervisor state / operating system). They
needed a rather heavily modified version of VM/SP to run.
Boeblingen got their hands slapped for 115/125. They had a 9-position
memory bus for (up to) nine microprocessors. For the 115, all the
microprocessors were the same. For the 125, the microprocessor running
370 microcode was 50% faster.

As undergraduate, I had done a lot of work on CP67 to reduce its fixed
real memory requirements to improve it running on 256kbyte 360/67 ...
including making parts of the kernel pageable (while lot of other stuff
I did as undergraduate shipped in standard CP67, pageable kernel pieces
didn't ship until VM370). In the morph from CP67->VM370, they simplified
and/or dropped a lot of stuff (including all my dynamic adaptive
resource management and scheduling). However, they still managed to
greatly bloat the vm370 fixed real memory (even with pieces of the
kernel pageable). It was so bloated that it wasn't even announced for
(370/125) 256kbyte memory. I get con'ed into getting vm370 running on
370/125 for a Scandinavian ship company (cutting back on the vm370 real
memory bloat).

I then get con'ed into design of 5-way 370/125 multiprocessor ... up to
five of the 125 microprocessors (which never announced/shipped) ... I
define a microcoded queued interface for dispatching and disk I/O ... VM
kernel puts things on the dispatching list ... but microprocessors pull
things off the dispatching list for execution ... when done ... they
place request block on queue for kernel execution. Do something similar
for disk i/o requests (the disk controller can pull things off for optimal
disk servicing throughput ... not FIFO).
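The queued interface described above is essentially a pair of work queues between the kernel and the engines; a minimal sketch (the names and the single-threaded simulation are mine, not the actual microcode interface):

```python
from collections import deque

# Sketch of the dispatch/completion queue pair described above: the kernel
# enqueues runnable work; engines (the 125 microprocessors, simulated here)
# pull from the dispatch list and post finished request blocks back on a
# completion queue for the kernel. Names are illustrative only.
dispatch_q: deque = deque()   # kernel -> engines
complete_q: deque = deque()   # engines -> kernel

def kernel_dispatch(task):
    """Kernel side: put a work request on the dispatching list."""
    dispatch_q.append(task)

def engine_step():
    """One engine pulls the next task, runs it, posts the completion."""
    if dispatch_q:
        task = dispatch_q.popleft()
        result = task()            # execute the work request
        complete_q.append(result)  # request block back to the kernel

kernel_dispatch(lambda: "job-1 done")
kernel_dispatch(lambda: "job-2 done")
engine_step()
engine_step()
print(list(complete_q))  # ['job-1 done', 'job-2 done']
```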

About the same time I do the 5-way 370/125 effort, Endicott con's me
into doing a lot of work for 138/148 ECPS microcode. Told that 370
instructions drop into microcode on about a byte-for-byte basis with ten
times speed up. There is 6kbytes of available microcode storage and need
to identify the 6kbytes of highest kernel execution pathlengths. Old
archive post that identified the 6kbytes of highest executed kernel
pathlengths accounted for 79.55% of kernel execution time
http://www.garlic.com/~lynn/94.html#21
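The underlying selection problem - spend at most 6kbytes of microcode store to cover the most kernel execution time - can be sketched as a greedy pick by time-per-byte (the path names and numbers below are invented; the real profile is in the linked post):

```python
# Greedy sketch of the ECPS selection problem: given profiled kernel paths
# with a size (bytes) and a share of kernel execution time, pick paths by
# time-per-byte density until the 6kbyte microcode budget is exhausted.
paths = [  # (name, bytes, % of kernel time) - all figures invented
    ("dispatch",   900,  20.0),
    ("page-fault", 1400, 18.5),
    ("free-store", 700,  12.0),
    ("untrans",    1600, 11.0),
    ("vio",        1200, 10.5),
    ("rarely-hit", 2000,  1.5),
]
BUDGET = 6 * 1024  # 6kbytes of microcode store

chosen, used, covered = [], 0, 0.0
for name, size, pct in sorted(paths, key=lambda p: p[2] / p[1], reverse=True):
    if used + size <= BUDGET:
        chosen.append(name)
        used += size
        covered += pct

print(chosen, used, f"{covered:.1f}% of kernel time")
```

Greedy-by-density is only an approximation of the exact knapsack optimum, but with a handful of candidate paths it conveys the trade-off.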

I also established that all the 138/148 ECPS changes could also be done
for the 125 5-way multiprocessor. However, then Endicott complained
that the 125 5-way multiprocessor would overlap the throughput of the
148. In the escalation meetings I had to do the arguments for both sides
of the table ... however corporate decided that the 125 5-way
multiprocessor wouldn't get announced.

Note that in the first half of the 70s, internally there was the Future System
project ... that was completely different than 370 and was going to
completely replace 370. The original motivation was to significantly raise the
barrier for clone controllers (370 efforts were being killed off, and the
lack of new 370 stuff during the period is credited with giving clone
370 processor makers a market foothold). When FS imploded, there
was a mad rush to get stuff back into the 370 product pipelines ...
and the quick & dirty 3033 and 3081 efforts were kicked off in parallel.
Some more details:
http://www.jfsowa.com/computer/memo125.htm

Head of POK managed to convince corporate to kill off the vm370 product,
shutdown the vm370 group (burlington/mass), and transfer all the people
to POK to work on MVS/XA (or otherwise MVS/XA wouldn't be able to
ship on time). Eventually Endicott managed to save the vm370 product
mission, but had to reconstitute a VM370 development group from
scratch. Endicott also tried to have VM370 integrated into every
138/148 shipped (something like current LPAR) ... but they weren't
able to get that through corporate.

trivia: POK wasn't going to tell the VM370 group until the last minute
to minimize the number of people that manage to escape. The information
leaked and there was a witch hunt for who leaked the information
(fortunately for me, nobody leaked who it was). This was in the early
days of DEC VMS and one of the jokes is that the head of (IBM) POK was
one of the largest contributors to VMS.

By the 80s, there was significantly more bloat for both VM370 and CMS.
They send me an early XT/370 to play with and do a lot of benchmarks
showing page thrashing with its 384kbyte 370 memory. Endicott blames me
for a six month slip in announce and ship to customers while they
upgrade 370 memory from 384kbyte to 512kbyte. The XT/370 processor
doesn't do any device I/O ... everything is interprocessor communication
with application running on 8088 ... doing I/O to the PC/XT devices. The
page thrashing and throughput were aggravated by all CP paging & CMS file
I/O being done to the XT hard disk at 100ms per record. I also contributed
a page replacement algorithm that was more effective ... especially
in constrained memory environment.
--
virtualization experience starting Jan1968, online at home since Mar1970
Terry Kennedy
2020-08-09 04:07:05 UTC
Post by Anne & Lynn Wheeler
Boeblingen got their hands slapped for 115/125. They had a 9-position
memory bus for (up to) nine microprocessors. For the 115, all the
microprocessors were the same. For the 125, the microprocessor running
370 microcode was 50% faster.
The 3125 was the only 370 system that I absolutely hated. There were various reasons, among them being:

* The MAI 3rd-party memory which was installed by guys in overalls with a sawzall, cutting a 4" x 4" hole in the CPU cabinet and filling it full of wire-wrap wires. The add-on memory had to be left offline during most of IMPL or the system would throw *BOTH* "CPU Early" and "CPU Late" errors. Then some magic needed to be done at a specific point during IMPL (something like a 15-second window) to enable the add-on memory.

* The console printer (as opposed to CRT on every other 370 I've had) was an oversized Selectric mechanism to handle pinfeed 14 7/8 x 11. It would jam all the time, and some interaction w/ DOS/VS Rel. 32, Power/VS and the hardware meant that if a system message came up while you had the cover open to clear the paper jam, the system would hang irrecoverably and you had to re-IPL (not re-IMPL, fortunately).

* This system came with the 2560 MFCM (Mother-Fluffing Card Mangler). I describe it as "2 input hoppers, 5 output hoppers, and a non-deterministic path between them". It also involved some flip-over, end-for-end and multiple 90-degree turns. It could take the better part of an hour to clear a particularly bad pile-up, and that was if we didn't have to get our CE to come out with the card saw. Another "feature" was that input and output hopper selection varied depending on what Power/VS partition your job was in. This resulted in student programs punching their output on other job decks instead of blank cards. After this we went back to the tried-and-true 2501 in read-only configuration and the 1442 in punch-only.
Peter Flass
2020-08-09 17:07:19 UTC
Post by Anne & Lynn Wheeler
I then get con'ed into design of 5-way 370/125 multiprocessor ... up to
five of the 125 microprocessors (which never announced/shipped) ... I
define a microcoded queued interface for dispatching and disk I/O ... VM
kernel puts things on the dispatching list ... but microprocessors pull
things off the dispatching list for execution ... when done ... they
place request block on queue for kernel execution. Do something similar
for disk i/o requests (disk controller, can pull things off for optimal
disk servicing throughput ... not FIFO).
Was that you? Doesn’t VM use a “sweep” algorithm? Process all requests that
will keep the arm moving in one direction, then reverse and process all the
requests in the other.
Post by Anne & Lynn Wheeler
Head of POK managed to convince corporate to kill off the vm370 product,
shutdown the vm370 group (burlington/mass), and transfer all the people
to POK to work on MVS/XA (or otherwise MVS/XA wouldn't be able to
ship on time). Eventually Endicott managed to save the vm370 product
mission, but had to reconstitute a VM370 development group from
scratch. Endicott also tried to have VM370 integrated into every
138/148 shipped (something like current LPAR) ... but they weren't
able to get that through corporate.
It was obvious to customers that something was going on. The quality of
maintenance sucked. Later, when they tried to restart, it was still bad
because so many people who knew anything were gone and there seemed to be
a lot of entry-level people.
--
Pete
Anne & Lynn Wheeler
2020-08-09 21:04:02 UTC
Post by Peter Flass
Was that you? Doesn’t VM use a “sweep” algorithm? Process all requests that
will keep the arm moving in one direction, then reverse and process all the
requests in the other.
Original CP67 was FIFO; at the univ. as an undergraduate in the 60s, I
changed it to ordered seek ... also did chained requests for paging in
one channel program (ordered by rotation, if didn't require seek)
... instead of separate channel program/SIO for every page (which was
one of the things retained for vm370, but a lot of other stuff was
dropped and/or at least drastically simplified).

for 370/125 ... the controller had real-time rotational position; there
was delay for rotation and delay for the seek arm ... there were some
situations where the total delay for a farther seek could be less than the
rotational delay at a closer arm position ... it was possible because the
whole I/O request queue was exposed to the controller ... which could do
real-time reordering, knowing not only the disk arm position but also the
real-time rotational position.
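That controller-side reordering can be modeled as picking the queued request that minimizes seek time plus the rotational wait remaining once the seek completes - a toy sketch with invented timing constants:

```python
# Toy model of the controller-side reordering described above: with the
# whole request queue visible, pick the request minimizing seek time plus
# the rotational wait after the seek. All timing constants are invented.
SEEK_PER_CYL = 0.1   # ms per cylinder moved (made-up)
MS_PER_REV   = 16.7  # one rotation (~3600 rpm)

def access_time(arm_cyl, rot_pos_ms, req_cyl, req_sector_ms):
    """Seek time plus rotational wait once the seek finishes."""
    seek = abs(req_cyl - arm_cyl) * SEEK_PER_CYL
    pos_after = (rot_pos_ms + seek) % MS_PER_REV   # rotation keeps going
    wait = (req_sector_ms - pos_after) % MS_PER_REV
    return seek + wait

def pick_next(arm_cyl, rot_pos_ms, queue):
    """queue: list of (cylinder, sector start position in ms on the track)."""
    return min(queue, key=lambda r: access_time(arm_cyl, rot_pos_ms, *r))

# A farther seek can still win when the nearer cylinder's sector has just
# passed under the heads:
queue = [(101, 1.0), (140, 6.0)]
print(pick_next(100, 2.0, queue))  # (140, 6.0)
```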
--
virtualization experience starting Jan1968, online at home since Mar1970
Anne & Lynn Wheeler
2020-08-09 22:03:06 UTC
Post by Anne & Lynn Wheeler
Original CP67 was FIFO; at the univ. as an undergraduate in the 60s, I
changed it to ordered seek ... also did chained requests for paging in
one channel program (ordered by rotation, if didn't require seek)
... instead of separate channel program/SIO for every page (which was
one of the things retained for vm370, but a lot of other stuff was
dropped and/or at least drastically simplified).
cp67 peaked around 80 page I/O transfers per second with 2301 fixed head
drums. with rotational chaining got it up to 270 page I/O transfers per
second (nine transfer per two rotations, drum formatted nine 4k pages
per pair of tracks with one of the 4k records spanning the end of one
track and the start of the next).

The 2301 & 2303 were similar fixed-head drums, except the 2301 read/wrote on
four heads in parallel - 1/4 the number of "tracks", each track four times
larger, and four times the transfer rate of the 2303 ... 60 revs/second,
9 page transfers per pair of revolutions; (60/2)*9 = 270/sec.

CP67 had a special CHFREE function ... that was invoked by the
interrupt handler as soon as the device handler got past the initial
phase ... which drastically cut the device redrive latency
(for queued requests).

One of the things that got simplified in morph to VM370 ... queued
request redrive wasn't checked until previous device interrupt had been
completely handled ... significantly increasing latency for starting
queued requests.

After transfer from cambridge science center to san jose research in
later part of the 70s ... I got to wander around most IBM and customer
locations in silicon valley ... including bldg14 (disk engineering) and
bldg15 (disk product test) across the street from SJR. At the time 14/15
were running dedicated, prescheduled, stand-alone mainframe testing,
7x24. They had recently tried MVS (for some concurrent testing), but MVS
had 15min mean-time-between-failure in that environment. I offered to
rewrite the input/output supervisor to make it bullet proof and never fail
... so they could do any amount of on-demand concurrent testing (greatly
improving productivity).

Downside was they started pointing the finger at my software whenever
there was a problem ... and I spent a lot of time playing disk engineer
shooting their hardware problems. I had also effectively reimplemented
CHFREE in VM370 ... significantly cutting redrive latency.

This turned up another problem in the new 3880 disk controller. While it
supported 3mbyte/sec transfer with a special hardware bypass ... everything
else was handled by a really slow JIB-prime processor (making everything
but actual data transfer much slower than the 3830 controller, which had a
fast horizontal microprogram processor). Trying to mask how slow the
3880 had become, they tried to present the end-of-channel-program interrupt
... before the 3880 was actually done ... hoping that the extra
processing would be hidden between the time the 3880 queued the
end-of-operation interrupt and the time the system tried to redrive a new I/O.

Bldg15 product test got #2 or #3 operational engineering processor for
doing disk i/o channel testing and had the first 3033 outside POK and
the first 4341 outside Endicott. Since product channel i/o testing used
trivial amounts of CPU ... we put up private online service on the 3033
with a 3830 and two spare strings of 3330 (16) drives.

Early Monday morning, I got an irate call from bldg15 asking what I had
done to the online service software ... online response had horribly
deteriorated. They repeatedly denied making any change ... until I
tracked down that they had swapped the 3830 controller for a test 3880
controller. The 3880 was presenting the ending interrupt early ... I was
almost immediately responding with SIOF for a queued request; because the
3880 was still busy, it responded with cc=1, SM+BUSY (controller busy),
and I had to requeue the request and wait for the CUE (control unit end)
interrupt before retrying the request again. This was six months before
any 3880s shipped to customers and they came up with some 3880 microcode
changes that tried to do a better job of masking the problem.

I write an IBM internal only report about the work for bldg14&15 and
happen to mention the MVS 15min MTBF ... for which the MVS group
attempts to get me separated from the company ... when that fails, they
try to make my career in IBM difficult in other ways.

Note that the 3090 had its number of channels designed based on total
channel busy for each channel, assuming 3830 controller performance.
However, when they started real live testing ... they found the 3880
drastically increased
channel busy for each operation ... and as a result 3090 had to
significantly increase the number of channels (trying to achieve desired
total system throughput). The increase in number of channels required an
extra TCM ... and 3090 product group semi-facetiously claimed that the
3880 product group had to credit 3090 group for the manufacturing cost
of the additional TCM for each 3090.

Note IBM marketing then respun the significant increase in number of
3090 channels as it being a marvelous I/O throughput machine (rather
than the increase in channels to compensate for the enormous increase in
channel busy caused by the slow 3880 controller).

Other channel trivia: In 1980, STL was bursting at the seams and
planning on moving 300 people from the IMS group to offsite bldg, with
dataprocessing service back to STL datacenter. The people had tried
remote 3270 support and found the human factors totally unacceptable.
I get con'ed into doing channel extender support so they can have
local channel-attached 3270 controllers at the offsite bldg (users weren't
able to see a response difference between offsite and in STL).

Hardware vendor tries to get IBM to approve allowing them to ship my
support ... but there is a group in POK playing with some serial stuff
that gets the release of my stuff vetoed (they were afraid that if it was
in the market, it would make it more difficult to ship their stuff).

In 1988, I'm asked to help LLNL standardize some serial stuff they are
playing with that quickly becomes Fibre Channel Standard (including some
of the stuff I had done in 1980). Then in 1990, the POK people get their
stuff released with ES/9000 as ESCON (when it is already obsolete). Then
some of the POK people become involved with Fibre Channel Standard
and define an extremely heavy weight protocol that drastically reduces
the native throughput ... which is eventually released as FICON. The
most recent mainframe peak I/O benchmark I've found is a z196 that used 104
FICON to get 2M IOPS. At that time, there was a Fibre Channel announced
for the E5-2600 blade claiming over a million IOPS (two such Fibre Channels
have higher throughput than 104 FICON running over 104 Fibre Channels).
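A quick back-of-the-envelope check of those figures (only the 2M-IOPS and 1M-IOPS numbers come from the post; the rest is arithmetic):

```python
# Back-of-the-envelope comparison of the IOPS figures quoted above.
ficon_total_iops = 2_000_000   # z196 peak benchmark over 104 FICON channels
ficon_channels = 104
fc_iops = 1_000_000            # claimed for one E5-2600 Fibre Channel

iops_per_ficon = ficon_total_iops / ficon_channels   # ~19,231 IOPS each
ratio = fc_iops / iops_per_ficon                     # per-channel advantage

print(f"per-FICON: {iops_per_ficon:,.0f} IOPS; native FC ~{ratio:.0f}x per channel")
```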
--
virtualization experience starting Jan1968, online at home since Mar1970
Peter Flass
2020-08-09 22:30:21 UTC
Post by Anne & Lynn Wheeler
After transfer from cambridge science center to san jose research in
later part of the 70s ... I got to wander around most IBM and customer
locations in silicon valley ... including bldg14 (disk engineering) and
bldg15 (disk product test) across the street from SJR. At the time 14/15
were running dedicated, prescheduled, stand-alone mainframe testing,
7x24. They had recently tried MVS (for some concurrent testing), but MVS
had 15min mean-time-between-failure in that environment. I offered to
rewrite the input/output supervisor to make it bullet proof and never fail
... so they could do any amount of on-demand concurrent testing (greatly
improving productivity).
Downside was they started pointing the finger at my software whenever
there was a problem ... and I spent a lot of time playing disk engineer
shooting their hardware problems. I had also effectively reimplemented
CHFREE in VM370 ... significantly cutting redrive latency.
You touch it, you own it.
--
Pete
J. Clarke
2020-08-07 14:24:37 UTC
Post by Quadibloc
[snip]
So, even without a major change in CPU technology versus memory technology...
instead of 8-bit processors like the 8080 and 6800, why didn't some enterprising
chipmaker take a microchip with an 8-bit ALU, and, through microprogramming,
produce a chip that was similar to a System/360 Model 30 - a chip with
instructions to operate on 32-bit integers and 32-bit and 64-bit floating-point
numbers?
[snip]
I don't recall exactly when but IBM actually did that some time before
the PC. I'd love to find the article in which they described it
again--I thought it was in Scientific American but I can't find it in
their archive.
t***@gmail.com
2020-08-07 18:26:03 UTC
Post by J. Clarke
Post by Quadibloc
[snip]
I don't recall exactly when, but IBM actually did that some time before
the PC. I'd love to find the article in which they described it
again--I thought it was in Scientific American, but I can't find it in
their archive.
Perhaps you are thinking of the IBM 5100?
(article in Dec. '75 Byte)
(look it up in Wikipedia)

Looks like it emulated both a 370 & a System/3.

- Tim
t***@gmail.com
2020-08-07 18:33:17 UTC
Permalink
Post by t***@gmail.com
Post by J. Clarke
Post by Quadibloc
[... original post snipped ...]
I don't recall exactly when but IBM actually did that some time before
the PC. I'd love to find the article in which they described it
again--I thought it was in Scientific American but I can't find it in
their archive.
Perhaps you are thinking of the IBM 5100?
(article in Dec. '75 Byte)
(look it up in Wikipedia)
Looks like it emulated both a 370 & a System/3.
- Tim
BTW, memory was the priority at the time. I don't think you could have
encouraged more development. The thinking was that you got a process cranking
out memory chips, then tweaked it for logic. That is not really true, but
that is what companies were doing in the '70s. Memory got cheaper,
briefly, in the '80s when the Japanese were dumping on the market, but
that was stopped by the US Govt. in the mid-'80s. Memory pricing stayed
artificially high until EDO memory came out.

- Tim
Terry Kennedy
2020-08-07 20:30:31 UTC
Permalink
Post by t***@gmail.com
BTW, memory was the priority at the time. I don't think you could have
encouraged more development. Thinking was you get a process cranking
out memory chips, then tweak it for logic. That is not really true, but
that is what companies were doing in the 70's. Memory got cheaper,
briefly, in the 80's when the Japanese were dumping on the market, but
that got stopped by the US Govt. mid 80s. Memory pricing was artificially
high until EDO memory came out.
Don't forget that there were two kinds of semiconductor RAM: static, which is basically an array of flip-flops, and dynamic, which is basically an array of capacitors. In the late '70s I was designing systems with 70ns static memory. At 64KB per board, a 512KB system was quite a beast (running a heavily modified version of MP/M).

While the Z80 CPU had provisions for handling dynamic RAM refresh, it got complicated on systems with memory mapping and > 64KB.
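The refresh obligation is easy to model. Below is a toy sketch (the retention figure is invented; the 128 rows match the Z80's 7-bit R register): each dynamic RAM row is a row of leaky capacitors, a refresh cycle recharges a whole row, and stepping through one row per cycle keeps everything alive as long as a full sweep fits inside the retention window.

```python
ROWS = 128        # the Z80's 7-bit R register steps through 128 row addresses
RETENTION = 2000  # invented figure: a row loses its data after this many ticks unrefreshed

def run(ticks, refresh_enabled):
    """Return True if every DRAM row still holds its data after `ticks` cycles."""
    last_refreshed = [0] * ROWS
    r_reg = 0                        # models the auto-incrementing R register
    for t in range(1, ticks + 1):
        if refresh_enabled:
            last_refreshed[r_reg] = t    # one row recharged per cycle
            r_reg = (r_reg + 1) % ROWS
    return all(ticks - lr < RETENTION for lr in last_refreshed)

print(run(10_000, True))    # True: each row gets swept well inside the window
print(run(10_000, False))   # False: the charge has leaked away
```

Static RAM, being flip-flops, needs none of this, which is part of why it was the easier (if less dense) choice for small systems.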
Peter Flass
2020-08-07 18:39:40 UTC
Permalink
Post by t***@gmail.com
Post by J. Clarke
Post by Quadibloc
[... original post snipped ...]
I don't recall exactly when but IBM actually did that some time before
the PC. I'd love to find the article in which they described it
again--I thought it was in Scientific American but I can't find it in
their archive.
Perhaps you are thinking of the IBM 5100?
(article in Dec. '75 Byte)
(look it up in Wikipedia)
That’s the one. For some reason I had a mental block preventing me from
finding it in Wikipedia.
--
Pete
Dennis Boone
2020-08-07 19:11:28 UTC
Permalink
Post by t***@gmail.com
Perhaps you are thinking of the IBM 5100?
(article in Dec. '75 Byte)
(look it up in Wikipedia)
Looks like it emulated both a 370 & a System/3.
IIRC the 5100 emulated those other systems only to the extent that it
could run APL\360 and the System/3 BASIC. I.e. it had a minimum
emulation of problem state.

De
Quadibloc
2020-08-07 19:35:25 UTC
Permalink
Post by Dennis Boone
Post by t***@gmail.com
Perhaps you are thinking of the IBM 5100?
(article in Dec. '75 Byte)
(look it up in Wikipedia)
Looks like it emulated both a 370 & a System/3.
IIRC the 5100 emulated those other systems only to the extent that it
could run APL\360 and the System/3 BASIC. I.e. it had a minimum
emulation of problem state.
Yes, you are correct.

John Savard
Quadibloc
2020-08-07 19:37:49 UTC
Permalink
Post by Quadibloc
Post by Dennis Boone
Post by t***@gmail.com
Perhaps you are thinking of the IBM 5100?
(article in Dec. '75 Byte)
(look it up in Wikipedia)
Looks like it emulated both a 370 & a System/3.
IIRC the 5100 emulated those other systems only to the extent that it
could run APL\360 and the System/3 BASIC. I.e. it had a minimum
emulation of problem state.
Yes, you are correct.
Except that it didn't run APL/360; it ran APLSV.

John Savard
J. Clarke
2020-08-08 17:23:54 UTC
Permalink
Post by Quadibloc
Post by Quadibloc
Post by Dennis Boone
Post by t***@gmail.com
Perhaps you are thinking of the IBM 5100?
(article in Dec. '75 Byte)
(look it up in Wikipedia)
Looks like it emulated both a 370 & a System/3.
IIRC the 5100 emulated those other systems only to the extent that it
could run APL\360 and the System/3 BASIC. I.e. it had a minimum
emulation of problem state.
Yes, you are correct.
Except that it didn't run APL/360; it ran APLSV.
Back in the '90s I almost got one of those. Was offered to me free.
If it had had APL on it I would have grabbed it instantly but it was
BASIC-only, drat it. Oh, well, one less item of clutter.
Anne & Lynn Wheeler
2020-08-09 02:29:31 UTC
Permalink
Post by Dennis Boone
IIRC the 5100 emulated those other systems only to the extent that it
could run APL\360 and the System/3 BASIC. I.e. it had a minimum
emulation of problem state.
https://en.wikipedia.org/wiki/IBM_5100

note error in the above ... it was done at the palo alto science center
(not the los gatos lab) ... trivia: the palo alto science center also did
the 370/145 APL microcode assist (which ran apl applications at roughly
the throughput of a 370/168 without the assist). topic drift ... nearly
all low- and mid-range 360s & 370s were implemented in native microcode,
averaging ten native microcode instructions per 360/370 instruction.
other drift: I had part of the Los Gatos wing, with offices and labs.

similar but different: not far from the palo alto science center (on page
mill), SLAC (on sand hill, in collaboration with CERN) did the 168E in the
late 70s ... a hardware processor that implemented problem-state 370 for
fortran program execution with the throughput of a 370/168 ... placed at
sensors along the accelerator for initial data reduction. then in the
early 80s, it was replaced/upgraded with the 3081E
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3069.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3680.pdf
http://www.slac.stanford.edu/cgi-wrap/getdoc/slac-pub-3753.pdf
--
virtualization experience starting Jan1968, online at home since Mar1970
J. Clarke
2020-08-08 17:21:29 UTC
Permalink
Post by t***@gmail.com
Post by J. Clarke
Post by Quadibloc
[... original post snipped ...]
I don't recall exactly when but IBM actually did that some time before
the PC. I'd love to find the article in which they described it
again--I thought it was in Scientific American but I can't find it in
their archive.
Perhaps you are thinking of the IBM 5100?
(article in Dec. '75 Byte)
(look it up in Wikipedia)
Looks like it emulated both a 370 & a System/3.
Might have been talking about the chip used, but it was a description
of the process of creating the chip, not related to any particular
product model.
Peter Flass
2020-08-07 18:31:57 UTC
Permalink
Post by J. Clarke
Post by Quadibloc
[... original post snipped ...]
I don't recall exactly when but IBM actually did that some time before
the PC. I'd love to find the article in which they described it
again--I thought it was in Scientific American but I can't find it in
their archive.
I have a vague memory of this but can’t find a reference. What I recall
was a tabletop machine that could run in two modes, one of which (I think)
was the 360 instruction set. It predates the XT/370, etc., and was aimed at
the scientific or engineering market.
--
Pete
Joe Pfeiffer
2020-08-07 16:05:18 UTC
Permalink
Post by Quadibloc
I was looking for information on K&E's abandoned prototype slide rule, the KE-Lon,
and found an article about how the demise of the slide rule was more gradual than
often acknowledged.
That may be, but it certainly was still more rapid than most other replacements of
older products by new technologies.
Where was that article? I bought my last slide rule intended for actual
use (as opposed to many I've gotten since 'cause they're cool) as a high
school senior in 1975. That fall, the Melcor SC-535 scientific
calculator got cheap enough for a college freshman to buy (I want to say
it went sub-$100, but I'm not sure on that memory). By 1977 I was
teasing my Physics lab partner about being a Luddite since she was one
of the last people in the department to still have a slide rule.
Quadibloc
2020-08-08 21:23:19 UTC
Permalink
Post by Joe Pfeiffer
Post by Quadibloc
I was looking for information on K&E's abandoned prototype slide rule, the KE-Lon,
and found an article about how the demise of the slide rule was more gradual than
often acknowledged.
That may be, but it certainly was still more rapid than most other replacements of
older products by new technologies.
Where was that article? I bought my last slide rule intended for actual
use (as opposed to many I've gotten since 'cause they're cool) as a high
school senior in 1975. That fall, the Melcor SC-535 scientific
calculator got cheap enough for a college freshman to buy (I want to say
it went sub-$100, but I'm not sure on that memory). By 1977 I was
teasing my Physics lab partner about being a Luddite since she was one
of the last people in the department to still have a slide rule.
Here's a link:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2507986

As I was of modest means when I attended University...

In my first year, pocket calculators weren't a thing yet. I spent some money on
the Sterling Plastics duplex log-log slide rule, but for actual use, I think I
stuck to the Ricoh 121 bamboo Darmstadt I got as a Christmas present while
attending Junior High.

About a year after the Texas Instruments SR-50 came out, I was finally able to
get my first pocket calculator, a Microlith Scientific. This calculator, the
model 205, ran on one 9-volt battery and had a vacuum fluorescent display (large
green numbers); while it had a two-digit exponent, it calculated to only eight
digits of precision, and the log and trig functions were computed to only five
digits. But I could get it for $25, which I could afford, and it
still was far more powerful than a slide rule.

I'm pretty sure that this calculator, capable enough to supersede the slide rule
completely, came out before inexpensive calculators were available from the more
major companies - so I think that claiming it was the TI-30 that "killed" the
slide rule is too late, even if claiming the HP 35 did it is too early.

John Savard
undefined Hancock-4
2020-08-15 18:42:04 UTC
Permalink
Post by Quadibloc
I was looking for information on K&E's abandoned prototype slide rule, the KE-Lon,
and found an article about how the demise of the slide rule was more gradual than
often acknowledged.
That may be, but it certainly was still more rapid than most other replacements of
older products by new technologies.
Where was that article? I bought my last slide rule intended for actual
use (as opposed to many I've gotten since 'cause they're cool) as a high
school senior in 1975. That fall, the Melcor SC-535 scientific
calculator got cheap enough for a college freshman to buy (I want to say
it went sub-$100, but I'm not sure on that memory). By 1977 I was
teasing my Physics lab partner about being a Luddite since she was one
of the last people in the department to still have a slide rule.
Serious engineering and science students rushed out and got electronic calculators
when they became available in the early 1970s. _IF_ they could afford one.
They were not cheap. The rest of us mere mortals stuck with our slide rules.

I remember that for many years calculators were forbidden in exams as not
all students had them--too expensive. Slide rules were cheap. (Someone
mentioned the Sterling plastic units, I think only $3).
Quadibloc
2020-08-16 04:48:52 UTC
Permalink
Post by undefined Hancock-4
Serious engineering and science students rushed out and got electronic calculators
when they became available in the early 1970s. _IF_ they could afford one.
They were not cheap. The rest of us mere mortals stuck with our slide rules.
For a little while longer. The HP 35 came out in 1972 and it certainly wasn't
cheap.

The SR-50 came out in 1974, and it wasn't quite as expensive, but it wasn't
cheap either.

But by 1976, they were cheap. So while ordinary mortals had to wait for four
years, they did get their calculators eventually.

John Savard
JimP
2020-08-16 16:15:57 UTC
Permalink
On Sat, 15 Aug 2020 11:42:04 -0700 (PDT), undefined Hancock-4
Post by undefined Hancock-4
Post by Quadibloc
[... earlier posts snipped ...]
Serious engineering and science students rushed out and got electronic calculators
when they became available in the early 1970s. _IF_ they could afford one.
They were not cheap. The rest of us mere mortals stuck with our slide rules.
I remember that for many years calculators were forbidden in exams as not
all students had them--too expensive. Slide rules were cheap. (Someone
mentioned the Sterling plastic units, I think only $3).
I was working at a junior college in the late 1970s. They had two very
heavy calculators made by, I think, Litton. Cost was $800 each. Purchased,
IIRC, about 5 years previous. I had bought an algebra and regular
function calculator for less than $40.

By 1977, that community college had stopped teaching its slide rule
class.

In 1978, I bought the TI SR-52 programmable calculator for just under
$400. The next year the TI-78, etc. came out. Half that price and had
twice as many programming steps. I think the most expensive of those
three could vary the number of programming lines at a sacrifice of
memory.
--
Jim
h***@bbs.cpcn.com
2020-08-17 19:36:17 UTC
Permalink
Post by JimP
On Sat, 15 Aug 2020 11:42:04 -0700 (PDT), undefined Hancock-4
Post by undefined Hancock-4
Post by Quadibloc
[... earlier posts snipped ...]
Serious engineering and science students rushed out and got electronic calculators
when they became available in the early 1970s. _IF_ they could afford one.
They were not cheap. The rest of us mere mortals stuck with our slide rules.
I remember that for many years calculators were forbidden in exams as not
all students had them--too expensive. Slide rules were cheap. (Someone
mentioned the Sterling plastic units, I think only $3).
I was working at a junior college in the late 1970s. They had two very
heavy calculators by I think Litton. Cost was $800 each. Purchased,
IIRC, about 5 years previous. I had bought an algebra and regular
function calculator for less than $40.
By 1977, that community college had stopped teaching its slide rule
class.
In 1978, I bought the TI SR-52 programable calculator for just under
$400. The next year the TI- 78, etc. came out. Half that price and had
twice as many programming steps. I think the most expensive of those
three could vary the number of programming lines at a sacrifice of
memory.
Note that in 1979 $40 was still serious money, maybe
roughly $150 today. Affordable, but not trivial.
Quadibloc
2020-08-18 09:04:29 UTC
Permalink
Post by JimP
In 1978, I bought the TI SR-52 programable calculator for just under
$400. The next year the TI- 78, etc. came out. Half that price and had
twice as many programming steps. I think the most expensive of those
three could vary the number of programming lines at a sacrifice of
memory.
Huh?

The SR-52 came out, and at $400 it was expensive. It saved programs on magnetic
cards, and thus was competing with the HP-65.

Texas Instruments' next programmable scientific calculator was the SR-56. That
was a _lot_ less expensive. I had one. No magnetic cards, and _fewer_ steps than
the SR-52.

*Two* years later (the SR-52 came out in fall 1975; the TI-58 and TI-59 came
out in May 1977) there were the TI-58 and TI-59. The TI-59 had magnetic cards;
the TI-58 had 480 programming steps, the TI-59 had 960 programming steps, and
they both used the same programming code. The later TI-66, a slim LCD
calculator, was also program-compatible.
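The "vary the number of programming lines at a sacrifice of memory" feature mentioned upthread was the TI-59's repartitioning: one memory pool split between program steps and data registers, at eight steps per register. A quick sketch using the commonly cited figures (960 steps with no registers; 480 steps and 60 registers was the default split) - an illustrative model, not the calculator's firmware:

```python
# TI-59 memory partitioning: one pool equivalent to 960 program steps,
# with each data register costing eight steps of that pool.
TOTAL_STEPS = 960
STEPS_PER_REGISTER = 8

def partition(registers):
    """Program steps left after reserving `registers` data registers."""
    steps = TOTAL_STEPS - registers * STEPS_PER_REGISTER
    if steps < 0:
        raise ValueError("not enough memory for that many registers")
    return steps

print(partition(0))    # 960 steps, no data registers
print(partition(60))   # 480 steps - the default split
print(partition(100))  # 160 steps with the full 100 registers
```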

Never heard of a TI-78 that came out the next year after the SR-52 and had more
programming steps.

John Savard
JimP
2020-08-19 12:53:09 UTC
Permalink
On Tue, 18 Aug 2020 02:04:29 -0700 (PDT), Quadibloc
Post by Quadibloc
Post by JimP
In 1978, I bought the TI SR-52 programable calculator for just under
$400. The next year the TI- 78, etc. came out. Half that price and had
twice as many programming steps. I think the most expensive of those
three could vary the number of programming lines at a sacrifice of
memory.
Huh?
The SR-52 came out, and at $400 it was expensive. It saved programs on magnetic
cards, and thus was competing with the HP-65.
Texas Instruments' next programmable scientific calculator was the SR-56. That
was a _lot_ less expensive. I had one. No magnetic cards, and _fewer_ steps than
the SR-52.
*Two* years later (The SR-52 came out in fall 1975; the TI-58 and TI-59 came out in May, 1977) there were the TI-58 and TI-59. The TI-59 had magnetic cards; the
TI-58 had 480 programming steps, the TI-59 had 960 programming steps, and they
both used the same programming code. The later TI-66, a slim LCD calculator, was
also program-compatible.
Never heard of a TI-78 that came out the next year after the SR-52 and had more
programming steps.
John Savard
I was grasping for model names. Ah well. I thought there were 3 models
in that series of 58 and 59. I guess not.

My SR-52 worked until the card-reader motor ran away with itself.
And the early rechargeable batteries were no good.
--
Jim
Robert Swindells
2020-08-19 14:15:40 UTC
Permalink
Post by JimP
Post by Quadibloc
Post by JimP
[... earlier posts snipped ...]
I was grasping for model names. Ah well. I thought there were 3 models
in that series of 58 and 59. I guess not.
There were 3 models in that series, the other one was the TI-57, my father
had one.

<https://en.wikipedia.org/wiki/TI-57>

The wiki pages on the different models match what I remember.
JimP
2020-08-19 15:51:25 UTC
Permalink
On Wed, 19 Aug 2020 14:15:40 -0000 (UTC), Robert Swindells
Post by Robert Swindells
Post by JimP
Post by Quadibloc
Post by JimP
[... earlier posts snipped ...]
I was grasping for model names. Ah well. I thought there were 3 models
in that series of 58 and 59. I guess not.
There were 3 models in that series, the other one was the TI-57, my father
had one.
<https://en.wikipedia.org/wiki/TI-57>
The wiki pages on the different models match what I remember.
Ah, there were three of them. And they basically cost less than what I
paid for the SR-52. Didn't get them, as I didn't need a calculator
anymore. Well, not for calculus, which is why I bought the SR-52.
--
Jim
undefined Hancock-4
2020-08-15 18:39:14 UTC
Permalink
Post by Quadibloc
I was looking for information on K&E's abandoned prototype slide rule, the KE-Lon,
and found an article about how the demise of the slide rule was more gradual than
often acknowledged.
That may be, but it certainly was still more rapid than most other replacements of
older products by new technologies.
One thing that came to my mind was that, while it was true that machines like the
Wang 500 or the HP 9100A preceded the arrival of the pocket calculator, still,
while slide rule makers could perhaps have been expected to see the pocket
calculator coming _eventually_, the rapid pace of improvement in digital
electronics was not so obvious at the time.
I'm not so sure digital electronics was so rapid. In magazine ads from
the 1960s, digital electronic equipment was enormously expensive.
For instance, while there were electronic desk calculators, their
high cost limited their adoption for years, and the slow, chuggy
electro-mechanical units remained in service.

It took a long time for electronics to come down enough in cost to
replace relays and mechanical units.
Post by Quadibloc
And the pocket calculator came to market *before* the 8-bit personal computer.
Computer memory chips are, of course, made using the same basic digital microchip
technology as computer processor chips. There are differences in the fabrication
processes used, so that memory designs can emphasize density over speed, but these
are variations on the same basic technology.
I am far from an expert on that, but I suspect the differences in electronics
of a calculator compared to a mainframe computer are pretty significant.
The IBM S/360 history goes into a lot of the issues involved.
Post by Quadibloc
So it's difficult to see how an alternate history could have happened in which CPU
technology developed more slowly, but memory technology, at least in terms of
density if not speed, developed more quickly.
A pocket calculator chip performs calculations on decimal floating-point numbers,
often including trig and log functions. So it performs pretty complex operations.
And there were also programmable calculators.
The operations performed by the instructions in even a mainframe computer like
the IBM System/360 weren't any more complex.
Others have commented that the instruction set of the S/360 was far more
complex than a calculator. There were many more options regarding
storage, addressing, data type, and I/O. Note how many ADD instructions
existed in the basic S/360.
Post by Quadibloc
So, even without a major change in CPU technology versus memory technology...
instead of 8-bit processors like the 8080 and 6800, why didn't some enterprising
chipmaker take a microchip with an 8-bit ALU, and, through microprogramming,
produce a chip that was similar to a System/360 Model 30 - a chip with
instructions to operate on 32-bit integers and 32-bit and 64-bit floating-point
numbers?
I don't think it was possible, especially with the technology of the time. Indeed,
the S/360-30, despite being a low-end machine, was too expensive for
many prospective customers. IBM had to develop the Model 20 series
and even that was too expensive. It took later electronic developments
for IBM to come out with the 'discount' System/3 product line, which had
a far simpler architecture (well described on bitsavers).

Again, I am far from an expert, but I think chips at the time for fairly
limited. Note that in 1967 they were still advertising classic vacuum tubes
for certain applications, such as color TV snivet control (whatever that is).
I think the ICs of the time were pretty limited. They didn't have a full
microprocessor yet, and wouldn't for a few years.
Post by Quadibloc
I suppose that the reason was one of efficiency and flexibility. A chip designed
that way wouldn't have been able to run with maximum efficiency on problems only
involving 8-bit integers; the microprogram layer would always be in the way. The
other way around, BASIC interpreters could certainly include floating-point
subroutines... and, if desired, one could even have the UCSD P-System, where an
8-bit micro is turned into a mainframe-like computer through the use of an
interpretive routine for a more powerful instruction set.
Quadibloc
2020-08-16 04:54:11 UTC
Permalink
Post by undefined Hancock-4
I am far from an expert on that, but I suspect the differences in electronics
of a calculator compared to a mainframe computer are pretty significant.
The IBM S/360 history goes into a lot of the issues involved.
There were differences. In order for a computer, even one with just a four-bit
ALU, to be small enough to be put in a pocket, and cheap enough to afford, the
cheapest, densest fabrication process was used to make complicated but slow
chips. So calculator chips at the time used PMOS, while early microprocessors
used the faster NMOS process.

Mainframe computers used TTL for a while longer, or even ECL, those processes
producing the fastest chips, but with higher power consumption.

These days, everything uses CMOS. It uses both types of MOS transistors, so it's
fundamentally as slow as PMOS - but because it uses less power, the circuits
could be shrunk, and that more than made up for it, so that today's chips run at
a few gigahertz.

John Savard
David Lesher
2020-08-16 21:44:14 UTC
Permalink
Post by undefined Hancock-4
Again, I am far from an expert, but I think chips at the time were fairly
limited. Note that in 1967 they were still advertising classic vacuum tubes
for certain applications, such as color TV snivet control (whatever that is).
I think the ICs of the time were pretty limited. They didn't have a full
microprocessor yet, and wouldn't for a few years.
FWIW:
AT&T Long Lines used circa 1930 K-Carrier {multiplexing} in revenue service
until 1975. It was CHOCK full of vacuum tubes, made by WECO of course.

I bought an HP-35 in 1972. Best move I ever made.