Discussion:
IBM System/360 ad
h***@bbs.cpcn.com
2019-11-25 21:20:51 UTC
https://archive.org/details/Nations-Business-1964-12/page/n51

Available with up to 8 meg of memory!

(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was
core but slow core.)
Quadibloc
2019-11-25 23:07:31 UTC
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was
core but slow core.)
Yes, that's right. And 16 megabytes of slow core could be attached to the
largest models in the System/360 line... which weren't available yet at the time
of that advertisement.

John Savard
Charlie Gibbs
2019-11-26 01:22:02 UTC
Post by Quadibloc
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was
core but slow core.)
Yes, that's right. And 16 megabytes of slow core could be attached to the
largest models in the System/360 line... which weren't available yet at
the time of that advertisement.
Besides, hardly anyone would have been able to afford that much core.
I remember seeing an article in a trade rag around 1971 or so about
how IBM rocked the industry by slashing the price of a megabyte of
memory from $75,000 to a mere $15,000.

(Meanwhile, I'm slipping into my pocket a thumb drive that cost me
50 cents per gigabyte...)
--
/~\ ***@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ "Alexa, define 'bugging'."
h***@bbs.cpcn.com
2019-11-27 20:54:17 UTC
Post by Charlie Gibbs
Post by Quadibloc
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was
core but slow core.)
Yes, that's right. And 16 megabytes of slow core could be attached to the
largest models in the System/360 line... which weren't available yet at
the time of that advertisement.
Besides, hardly anyone would have been able to afford that much core.
I remember seeing an article in a trade rag around 1971 or so about
how IBM rocked the industry by slashing the price of a megabyte of
memory from $75,000 to a mere $15,000.
In those days we talked in terms of K, not meg. We were happy
to have 192K on our S/360. I think the Univac 90/30 I worked
on had 256k, which we thought was very large.

But as the 1970s wore on, technology improved and memory
got cheaper. Of course, at the same time, software got
bloated.
Post by Charlie Gibbs
(Meanwhile, I'm slipping into my pocket a thumb drive that cost me
50 cents per gigabyte...)
Yes, I got a Sandisk flash drive with 32GB for $6.
Amazing. (Though this one doesn't have the little red light
that glows when in use. I wish it did. Also, I don't know
if that is 'real' memory or compressed memory.)
Charlie Gibbs
2019-11-28 18:08:29 UTC
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was
core but slow core.)
Yes, that's right. And 16 megabytes of slow core could be attached
to the largest models in the System/360 line... which weren't
available yet at the time of that advertisement.
Besides, hardly anyone would have been able to afford that much core.
I remember seeing an article in a trade rag around 1971 or so about
how IBM rocked the industry by slashing the price of a megabyte of
memory from $75,000 to a mere $15,000.
In those days we talked in terms of K, not meg. We were happy
to have 192K on our S/360. I think the Univac 90/30 I worked
on had 256k, which we thought was very large.
It was. I don't know whether I ever used a 256K 90/30 - most
of them around here had 192K. Some had only 128K, which was
uncomfortably tight.

Univac finally broke down and said you needed at least 64K (rather
than 32K) to run a minimum IMS/90 installation (their equivalent
of CICS). Mind you, that was three terminals making inquiries
into a single ISAM file, i.e. pretty useless.

(Actually, they said 65K, not 64K, since these statements came
from the marketroids, who live in that magical world where
32 + 32 = 65. The customers would only find out that the
configuration they purchased on a low-balled bid was inadequate
once they tried to get it running, at which point it was cheaper
to just buy an upgrade than go somewhere else.)
Post by h***@bbs.cpcn.com
But as the 1970s wore on, technology improved and memory
got cheaper. Of course, at the same time, software got
bloated.
Yup. Consider that 10,000 times as much memory is now considered
barely adequate in a garden-variety home computer.
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
(Meanwhile, I'm slipping into my pocket a thumb drive that cost me
50 cents per gigabyte...)
Yes, I got a Sandisk flash drive with 32GB for $6.
Amazing. (Though this one doesn't have the little red light
that glows when in use. I wish it did. Also, I don't know
if that is 'real' memory or compressed memory.)
I really want that red light. I never pull a thumb drive out
of the socket until the light stops flashing, no matter what
the software says.
--
/~\ ***@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ "Alexa, define 'bugging'."
Ahem A Rivet's Shot
2019-11-28 09:44:32 UTC
On 26 Nov 2019 01:22:02 GMT
Post by Charlie Gibbs
(Meanwhile, I'm slipping into my pocket a thumb drive that cost me
50 cents per gigabyte...)
	Long range data bandwidth has done similar things. In the early 1980s you
could struggle to get a reliable 1200bps long distance connection and pay
through the nose for it, or get a reliable 56/64k connection and pay by the
limb for it. In the mid 1990s here a typical dial-up ISP offered cheap 14.4k
connections (plus not-so-cheap phone charges) out of an expensive 64K pipe,
and there were only a handful in the (admittedly small) country.

	Today there's an uncapped, flat-rate, gigabit pipe to my house
running 24/7 that often provides tens of megabytes per second on a
transatlantic connection; what's more, all the connection hardware was
installed free of charge.

I do miss Morten posting on the stunts being pulled to make that
happen.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Peter Flass
2019-11-28 18:04:19 UTC
Post by Ahem A Rivet's Shot
On 26 Nov 2019 01:22:02 GMT
Post by Charlie Gibbs
(Meanwhile, I'm slipping into my pocket a thumb drive that cost me
50 cents per gigabyte...)
Long range data bandwidth has done similar things. Early 1980s you
could struggle to get a reliable 1200bps long distance connection and pay
through the nose for it, or get a reliable 56/64k connection and pay by the
limb for it. Mid 1990s here a typical dial up ISP offered cheap 14.4k
connections (plus not so cheap phone charges) out of an expensive 64K pipe
and there were only a handful in the (admittedly small) country.
Today there's an uncapped, flat-rate, gigabit pipe to my house
running 24/7 that often provides tens of megabytes per second on a
transatlantic connection, what's more all the connection hardware was
installed free of charge.
I do miss Morten posting on the stunts being pulled to make that
happen.
Yes :-(
--
Pete
Anne & Lynn Wheeler
2019-11-26 00:41:31 UTC
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was
core but slow core.)
s/360 & s/370 architecture only had 24bit (16mbyte) addressing (real
& virtual) ... except for the 360/67, which still had "real" 24bit addressing
but supported 32bit virtual. Note that was the architecture ... specific
models might have a much smaller number of hardware address lines
... limiting the actual amount of memory that could be attached.

by the time the 3033 came around ... 16mbytes was a huge constraint because of
the excessively bloated MVS kernel size as well as the concurrent
multiprogramming needed to try and keep the system busy ... lots of page
thrashing. They did a kludge for 64mbyte support. There were two
undefined/unused bits in each page table entry (mapping a virtual address
to a hardware address). They used the two unused bits, prepended to the
12bit page frame number (allowing 64mbytes worth of 4kbyte pages ... rather
than just 16mbytes).

Instructions (virtual and real) could only form a 24bit address ... but
a virtual address (using the page table entry hack) could map to a
64mbyte hardware address ... aka the hacked 3033 had internal 26bit
hardware address lines.

complementing the 64mbyte page table hack ... the original 370
architecture had fullword (31bit) channel program IDALs
... which could be used to generate 64mbyte I/O transfer addresses.
--
virtualization experience starting Jan1968, online at home since Mar1970
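[The page-table trick described in the post above can be sketched numerically. This is a hypothetical illustration of the addressing arithmetic only; the field positions are chosen for clarity, not taken from the actual 3033 PTE layout.]

```python
# Sketch of the 3033 page-table kludge: a 24-bit address splits into a
# 12-bit page number and a 12-bit byte offset (4 KB pages). Prepending the
# two formerly-unused PTE bits to the 12-bit real frame number yields a
# 14-bit frame number, i.e. 2**14 * 4096 = 64 MB of addressable real memory.

PAGE_SHIFT = 12          # 4 KB pages
PAGE_MASK = 0xFFF        # low 12 bits = byte offset within the page

def translate(vaddr_24bit, frame_12bit, extra_2bits):
    """Map a 24-bit virtual address to a (up to) 26-bit real address."""
    offset = vaddr_24bit & PAGE_MASK
    real_frame = (extra_2bits << 12) | frame_12bit   # 14-bit frame number
    return (real_frame << PAGE_SHIFT) | offset

# Largest reachable real address: frame 0x3FFF, offset 0xFFF -> 0x3FFFFFF
assert translate(0xFFF, 0xFFF, 0b11) == 0x3FFFFFF
assert 0x3FFFFFF + 1 == 64 * 1024 * 1024             # 64 MB
```

[Instructions still form only 24-bit addresses; only the PTE-to-frame mapping, and the fullword IDAL addresses in channel programs, can reach beyond 16 MB.]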
Scott Lurndal
2019-11-26 01:31:31 UTC
Post by Anne & Lynn Wheeler
by the time 3033 came around ... 16mbyte was huge constraint because of
the excessively bloated MVS kernel size as well concurrent
multiprogramming needed to try and keep system busy ... lots of page
thrashing. They did a kludge for 64mbyte support. There were two
undefined/unused bits in each page table entry (mapping virtual address
to a hardware address). They used the two unused bits to prepend to the
12bit page number (allowing 64mbytes worth of 4kbyte pages ... rather
than just 16mbytes)
Instructions (virtual and real) could only form a 24bit address ... but
a virtual address (using the page table entry hack) could map to a
64mbyte hardware address ... aka hacked 3033 that had internal 26bit
hardware address lines.
Intel did something similar later in the P6 days to support 36-bit physical
addresses with 32-bit virtual addresses.
h***@bbs.cpcn.com
2019-11-27 20:49:51 UTC
Post by Anne & Lynn Wheeler
s/360 & s/370 architecture only having 24bit (16mbyte) addressing (real
& virtual) ... except for 360/67 which still had "real" 24bit addressing
but support 32bit virtual. Note that was the architecture ... specific
models might have much smaller number of hardware address lines
... limiting the actual amount of memory that could be attached.
Anyone know how much real memory the Z series supports today?

Unlike the past, it's hard to get firm information from IBM
on various models, or even whether they have specific models at all.
Andy Burns
2019-11-27 21:23:00 UTC
Post by h***@bbs.cpcn.com
Anyone know how much real memory does the Z series support today?
Unlike the past, it's hard to get firm information from IBM
on various models, even if they even have specific models.
40TB

<https://www.mainline.com/ibm-z15-september-12-2019-announcement>

since that's in RAIM config, I assume less usable memory if you
actually mirror/stripe/whatever it?
h***@bbs.cpcn.com
2019-11-27 21:39:25 UTC
Post by Andy Burns
Post by h***@bbs.cpcn.com
Anyone know how much real memory does the Z series support today?
Unlike the past, it's hard to get firm information from IBM
on various models, even if they even have specific models.
40TB
<https://www.mainline.com/ibm-z15-september-12-2019-announcement>
since that's in RAIM config, I assume less useable memory if you
actually mirror/stripe/whatever it?
OMG.

But I must admit caching is nice. First time I read a file
it takes a minute or two. But each successive access
runs very fast. I assumed the file is stored in memory
in its entirety.
J. Clarke
2019-11-27 23:18:43 UTC
Post by h***@bbs.cpcn.com
Post by Andy Burns
Post by h***@bbs.cpcn.com
Anyone know how much real memory does the Z series support today?
Unlike the past, it's hard to get firm information from IBM
on various models, even if they even have specific models.
40TB
<https://www.mainline.com/ibm-z15-september-12-2019-announcement>
since that's in RAIM config, I assume less useable memory if you
actually mirror/stripe/whatever it?
OMG.
But I must admit caching is nice. First time I read a file
it takes a minute or two. But each successive access
runs very fast. I assumed the file is stored in memory
in its entirety.
I did an experiment many years ago. I had two machines. One was a 1
GHz Pentium and the other was a 450 MHz Xeon. There was a particular
dataset and program that I was working with. I noticed that the
dataset was larger than the cache on the Pentium but smaller than the
cache on the Xeon. Same code, same OS, same as much else as possible,
the Xeon ran that particular program on that particular dataset twice
as fast.
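[The cache-residency effect in the anecdote above can be probed, very roughly, with a sketch like the following. It illustrates the method only: in CPython the interpreter overhead largely swamps cache misses, so a compiled language is needed to reproduce anything like the 2x difference reported.]

```python
# Rough sketch: time the same number of memory touches against a small
# footprint (likely cache-resident) and a large one (likely not).
import time

def time_touches(footprint_bytes, total_touches=200_000, line=64):
    buf = bytearray(footprint_bytes)
    n_lines = footprint_bytes // line
    s, idx = 0, 0
    t0 = time.perf_counter()
    for _ in range(total_touches):
        s += buf[idx * line]      # touch one byte per cache line
        idx += 1
        if idx == n_lines:
            idx = 0               # wrap: same work, different footprint
    return time.perf_counter() - t0

small = time_touches(32 * 1024)         # roughly L1-sized working set
large = time_touches(64 * 1024 * 1024)  # far larger than a typical LLC
```

[Both calls execute the same number of touches; only the memory footprint differs, which is the variable the anecdote isolates.]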
Alexander Schreiber
2019-12-01 23:23:03 UTC
Post by J. Clarke
Post by h***@bbs.cpcn.com
Post by Andy Burns
Post by h***@bbs.cpcn.com
Anyone know how much real memory does the Z series support today?
Unlike the past, it's hard to get firm information from IBM
on various models, even if they even have specific models.
40TB
<https://www.mainline.com/ibm-z15-september-12-2019-announcement>
since that's in RAIM config, I assume less useable memory if you
actually mirror/stripe/whatever it?
OMG.
But I must admit caching is nice. First time I read a file
it takes a minute or two. But each successive access
runs very fast. I assumed the file is stored in memory
in its entirety.
I did an experiment many years ago. I had two machines. One was a 1
GHz Pentium and the other was a 450 MHz Xeon. There was a particular
dataset and program that I was working with. I noticed that the
dataset was larger than the cache on the Pentium but smaller than the
cache on the Xeon. Same code, same OS, same as much else as possible,
the Xeon ran that particular program on that particular dataset twice
as fast.
Yes, running pretty much entirely in cache is _great_ for performance.
I've been told that during the days of the i486, the Linux scheduler
was carefully optimized to fit entirely into < 8 KB of memory - because
the i486 had 8 KB of cache.

Kind regards,
Alex.
--
"Opportunity is missed by most people because it is dressed in overalls and
looks like work." -- Thomas A. Edison
Scott Lurndal
2019-12-02 15:37:25 UTC
Post by Alexander Schreiber
Post by J. Clarke
Post by h***@bbs.cpcn.com
Post by Andy Burns
Post by h***@bbs.cpcn.com
Anyone know how much real memory does the Z series support today?
Unlike the past, it's hard to get firm information from IBM
on various models, even if they even have specific models.
40TB
<https://www.mainline.com/ibm-z15-september-12-2019-announcement>
since that's in RAIM config, I assume less useable memory if you
actually mirror/stripe/whatever it?
OMG.
But I must admit caching is nice. First time I read a file
it takes a minute or two. But each successive access
runs very fast. I assumed the file is stored in memory
in its entirety.
I did an experiment many years ago. I had two machines. One was a 1
GHz Pentium and the other was a 450 MHz Xeon. There was a particular
dataset and program that I was working with. I noticed that the
dataset was larger than the cache on the Pentium but smaller than the
cache on the Xeon. Same code, same OS, same as much else as possible,
the Xeon ran that particular program on that particular dataset twice
as fast.
Yes, running pretty much entirely in cache is _great_ for performance.
I've been told that during the days of the i486, the Linux scheduler
was carefully optimized to fit entirely into < 8 KB of memory - because
the i486 had 8 KB of cache.
I suspect that is an urban legend. The scheduler is only needed infrequently,
and most of it doesn't run on context switches.

What your correspondent probably meant was that the frequently used parts
of the scheduler were crafted to consume only a couple of cache lines by
collecting the most commonly used code together, which would leave the
remaining cache lines (some occupied by shared kernel or libc code, others
by user code) in the cache even when scheduling a new thread on the core.
David LaRue
2019-11-28 03:16:44 UTC
Post by h***@bbs.cpcn.com
Post by Anne & Lynn Wheeler
s/360 & s/370 architecture only having 24bit (16mbyte) addressing
(real & virtual) ... except for 360/67 which still had "real" 24bit
addressing but support 32bit virtual. Note that was the architecture
... specific models might have much smaller number of hardware
address lines ... limiting the actual amount of memory that could be
attached.
Anyone know how much real memory does the Z series support today?
Unlike the past, it's hard to get firm information from IBM
on various models, even if they even have specific models.
Such information would be of little use. Even a total system description
could be misleading. Compare what you like, but the actual system might be
emulated by a totally different platform. You'd have to tear it apart
to get the specifics, and you might be surprised by what you find.
David Wade
2019-11-26 21:35:29 UTC
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was
core but slow core.)
The ARCHITECTURE could support up to 16Mb, and indeed the 360/67 could
support more, but physically most 360s could only interface to smaller
amounts of RAM and only implemented a subset of the address lines.
In addition, if it spread out too far, the propagation delays would slow
the machine down.

For example a 360/40 could have up to 256k and I think a 360/67 could go
up to 2048k. There is a picture of a 360/67 with 512k here:-

http://history.cs.ncl.ac.uk/anniversaries/40th/images/ibm360_672/slide07.html

The core is in the four cabinets right at the back. When I used that
machine it had been upgraded to 1024K, so it had 8 of the big double cabinets.

The functional characteristics manuals which cover the specs are here:-

http://www.bitsavers.org/pdf/ibm/360/funcChar/

other machines had similar restrictions, so the early VAXs could only
handle 1 or 2Mb of memory; my VLC has 24Mb.

Dave
Bob Eager
2019-11-26 22:21:58 UTC
Post by David Wade
The ARCHITECTURE could support up to 16Mb and indeed the 360/67 could
support more, but physically, most 360's could only interface to smaller
amounts of RAM and only implemented a subset of the address lines.
In addition if it spread out too far the propogation delays will slow
the machine down.
other machines have similar restrictions,so the early VAXs could only
handle 1 or 2Mb of memory, my VLC has 24Mb.
Also see the 80386SX; only 24 address lines but full 32 bit architecture.
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Charlie Gibbs
2019-11-26 22:50:46 UTC
Post by David Wade
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was
core but slow core.)
The ARCHITECTURE could support up to 16Mb and indeed the 360/67 could
support more, but physically, most 360's could only interface to smaller
amounts of RAM and only implemented a subset of the address lines.
In addition if it spread out too far the propogation delays will slow
the machine down.
For example a 360/40 could have up to 256k and I think a 360/67 could go
up to 2048k. There is a picture of a 360/67 with 512k here:-
http://history.cs.ncl.ac.uk/anniversaries/40th/images/ibm360_672/slide07.html
The core is in the four cabinet right at the back. When I used that
machine it had been upgraded to 1024K so had 8 of the big double cabinets.
The functional characteristics manuals which cover the specs are here:-
http://www.bitsavers.org/pdf/ibm/360/funcChar/
Yours must have been an older machine. The 360/67 we had at UBC
had 256K in each cabinet. The above manual describes this as two
128K units per cabinet.

Although propagation delays were a factor, I suspect that marketing
was as well. Some shops had to go to a larger CPU than they needed
in order to get enough memory. Third-party suppliers were quick to
jump in. I once used a 360/30 that had 128K of memory, even though
you could only go to 64K on a stock machine. A switch and indicator
light for the extra address bit was placed in an unused portion of
the panel.

I once read about how Greyhound Computer Corporation would buy
360/30s that came off lease and add memory to them. Apparently
you could get up to 512K if you wanted.
--
/~\ ***@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ "Alexa, define 'bugging'."
David Wade
2019-11-27 00:41:04 UTC
Post by Charlie Gibbs
Post by David Wade
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was
core but slow core.)
The ARCHITECTURE could support up to 16Mb and indeed the 360/67 could
support more, but physically, most 360's could only interface to smaller
amounts of RAM and only implemented a subset of the address lines.
In addition if it spread out too far the propogation delays will slow
the machine down.
For example a 360/40 could have up to 256k and I think a 360/67 could go
up to 2048k. There is a picture of a 360/67 with 512k here:-
http://history.cs.ncl.ac.uk/anniversaries/40th/images/ibm360_672/slide07.html
The core is in the four cabinet right at the back. When I used that
machine it had been upgraded to 1024K so had 8 of the big double cabinets.
The functional characteristics manuals which cover the specs are here:-
http://www.bitsavers.org/pdf/ibm/360/funcChar/
Yours must have been an older machine. The 360/67 we had at UBC
had 256K in each cabinet. The above manual describes this as two
128K units per cabinet.
Not sure we are talking about the same cabinet. The store came in 256k
chunks, but when you actually checked there were 2 x 128k cabinets.
I assume it was interleaved...
Post by Charlie Gibbs
Although propagation delays were a factor, I suspect that marketing
was as well. Some shops had to go to a larger CPU than they needed
in order to get enough memory. Third-party suppliers were quick to
jump in. I once used a 360/30 that had 128K of memory, even though
you could only go to 64K on a stock machine. A switch and indicator
light for the extra address bit was placed in an unused portion of
the panel.
I seem to remember hearing that some of those upgrades needed a lot of
re-working of the hardware; on the other hand, that was the time of the
expensive "no change" upgrade: printers where the speed was set by a
link, but getting it changed was expensive.

Disk drives that were 100mb until a wire was cut when they became 200Mb...

etc. etc. etc.
Post by Charlie Gibbs
I once read about how Greyhound Computer Corporation would buy
360/30s that came off lease and add memory to them. Apparently
you could get up to 512K if you wanted.
That was a lot of memory for a 30....

Dave
Charlie Gibbs
2019-11-27 05:49:36 UTC
Post by David Wade
Post by Charlie Gibbs
Although propagation delays were a factor, I suspect that marketing
was as well. Some shops had to go to a larger CPU than they needed
in order to get enough memory. Third-party suppliers were quick to
jump in. I once used a 360/30 that had 128K of memory, even though
you could only go to 64K on a stock machine. A switch and indicator
light for the extra address bit was placed in an unused portion of
the panel.
I seem to remember hearing that some of those upgrades needed a lot of
re-working of the hardware, on the other hand that was the time of the
expensive "no change" upgrade. Printers where the speed was set by a
link, but getting it changed was expensive.
Disk drives that were 100mb until a wire was cut when they became 200Mb...
etc. etc. etc.
This particular /30 also had 3rd party disk and tape drives, along
with the memory upgrades. IBM was Not Pleased.

The ultimate 3rd party story I heard was the shop that had a 370/168
with all plug-compatible peripherals - and then they replaced the
processor with an Amdahl. The only IBM part left was the software.
--
/~\ ***@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ "Alexa, define 'bugging'."
David Wade
2019-11-27 11:22:30 UTC
Post by Charlie Gibbs
Post by David Wade
Post by Charlie Gibbs
Although propagation delays were a factor, I suspect that marketing
was as well. Some shops had to go to a larger CPU than they needed
in order to get enough memory. Third-party suppliers were quick to
jump in. I once used a 360/30 that had 128K of memory, even though
you could only go to 64K on a stock machine. A switch and indicator
light for the extra address bit was placed in an unused portion of
the panel.
I seem to remember hearing that some of those upgrades needed a lot of
re-working of the hardware, on the other hand that was the time of the
expensive "no change" upgrade. Printers where the speed was set by a
link, but getting it changed was expensive.
Disk drives that were 100mb until a wire was cut when they became 200Mb...
etc. etc. etc.
This particular /30 also had 3rd party disk and tape drives, along
with the memory upgrades. IBM was Not Pleased.
The ultimate 3rd party story I heard was the shop that had a 370/168
with all plug-compatible peripherals - and then they replaced the
processor with an Amdahl. The only IBM part left was the software.
I believe that at one time NERC had 3rd party cartridge drives (3480s?)
on all its IBM boxes, but the VAXs had IBM drives....

Dave
Charlie Gibbs
2019-11-27 19:54:26 UTC
Post by David Wade
Post by Charlie Gibbs
The ultimate 3rd party story I heard was the shop that had a 370/168
with all plug-compatible peripherals - and then they replaced the
processor with an Amdahl. The only IBM part left was the software.
I believe that at one time NERC had 3rd party cartridge drives so 3480?
on all its IBM boxes, but the VAXs had IBM drives....
When I was with Univac I visited a customer site to do some work on their
90/30. While waiting for a run I wandered over to the 11/70 that was in
the same room, and popped the back off one of the RP06 drives, only to
find a plate reading "ISS / Sperry Univac".
--
/~\ ***@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ "Alexa, define 'bugging'."
h***@bbs.cpcn.com
2019-11-27 21:05:55 UTC
Post by Charlie Gibbs
When I was with Univac I visited a customer site to do some work on their
90/30. While waiting for a run I wandered over to the 11/70 that was in
the same room, and popped the back off one of the RP06 drives, only to
find a plate reading "ISS / Sperry Univac".
Our 90/30 had circuit cards by Intel.

Bell Telephone bought a lot of hardware from their rival,
Automatic Electric Co.
h***@bbs.cpcn.com
2019-11-27 21:04:14 UTC
Post by Charlie Gibbs
This particular /30 also had 3rd party disk and tape drives, along
with the memory upgrades. IBM was Not Pleased.
The S/360 history explains some of what was going on. Part
of the third-party hardware was the result of IBM engineers
'jumping ship' after S/360 development, taking their
expertise to newly formed companies. IBM was pissed about
that. But my personal impression is that IBM somewhat created
that situation by working their people ridiculously hard
during S/360 days and then not compensating them adequately.

Another part was that S/360 was very successful and IBM
was unable to meet demand.

Another part was that IBM wasn't cheap. IBM gave full
support, but at a price. Later, after unbundling, IBM
was more competitive.

Of course, a third party company was exploiting IBM's
research and development. But there were risks--they
could make a lot of money, but if IBM upgraded its
hardware, they could lose a lot of money.

Tom Watson Jr initially had the attitude that IBM
was entitled to own virtually the entire DP marketplace.
He enacted policies that got IBM sued. Later he woke up
and changed those policies, so when the govt sued, IBM
won.
Post by Charlie Gibbs
The ultimate 3rd party story I heard was the shop that had a 370/168
with all plug-compatible peripherals - and then they replaced the
processor with an Amdahl. The only IBM part left was the software.
Not unusual.

Our hospital leased its 360 from a third party leasing company.
IBM hated that, but they still gave us nice support. We rented
a few things and maintenance from them.
Anne & Lynn Wheeler
2019-11-27 23:11:56 UTC
Post by h***@bbs.cpcn.com
Another part was that S/360 was very successful and IBM
was unable to meet demand.
Another part was that IBM wasn't cheap. IBM gave full
support, but at a price. Later, after unbundling, IBM
was more competitive.
Of course, a third party company was exploiting IBM's
research and development. But there were risks--they
could make a lot of money, but if IBM upgraded its
hardware, they could lose a lot of money.
Tom Watson Jr initially had the attitude that IBM
was entitled to own virtually the entire DP marketplace.
He enacted policies got IBM sued. Later he woke up
and changed their policies, so when the govt sued, IBM
won.
rise and fall of ibm
https://www.ecole.org/en/session/49-the-rise-and-fall-of-ibm
article ... mentions countermeasures to clone controllers & devices
https://www.ecole.org/en/65/CM200195-ENG.pdf
IBM tried to react by launching a major project called the 'Future
System' (FS) in the early 1970's. The idea was to get so far ahead
that the competition would never be able to keep up, and to have such
a high level of integration that it would be impossible for
competitors to follow a compatible niche strategy. However, the
project failed because the objectives were too ambitious for the
available technology. Many of the ideas that were developed were
nevertheless adapted for later generations. Once IBM had acknowledged
this failure, it launched its 'box strategy', which called for
competitiveness with all the different types of compatible
sub-systems. But this proved to be difficult because of IBM's cost
structure and its R&D spending, and the strategy only resulted in a
partial narrowing of the price gap between IBM and its rivals.

At the end of ACS/360, IBM executives worried that it would advance the
state of the art too fast and they would lose control of the market. Amdahl
leaves IBM shortly afterwards and starts his own clone processor
company.
https://people.cs.clemson.edu/~mark/acs_end.html

Note that internal politics during the FS period was stopping/shutting
down 370 efforts (because FS was completely different and was going to
completely replace 370). The lack of new 370 products during the FS
period is credited with giving clone processor makers a market foothold.

23jun1969 unbundling posts
http://www.garlic.com/~lynn/submain.html#unbundling
future system posts
http://www.garlic.com/~lynn/submain.html#futuresys

clone controller trivia: 3 people from the science center installed
(virtual machine) CP67 at the univ. the last weekend of jan1968.
https://en.wikipedia.org/wiki/CP/CMS

It had 2741&1052 terminal support with automatic terminal type
identification. The univ. had some number of ascii/tty terminals, and so
I extended the support to ascii/tty (including auto terminal type
identification, using the IBM terminal controller SAD CCW to switch the
port terminal type scanner). I then wanted to extend this to having a
single dial-up number for all terminal types (hunt group) ... but it
didn't quite work: while the SAD command allowed switching the scanner
type, all ports had a fixed hardwired port speed (so terminals could end
up dynamically connected to a port with the wrong speed).

This was the motivation for the univ. to start a clone controller
project, building a channel attach board for an Interdata/3 programmed to
emulate the IBM terminal controller ... but including support for doing
dynamic line speed. This was then enhanced with an Interdata/4 for the
channel interface and a cluster of Interdata/3s for port scanners ... and
Interdata started selling them as IBM clone controllers (later under the
Perkin/Elmer logo after they bought Interdata). Four of us got written
up as responsible for (some part of) the clone controller business
https://en.wikipedia.org/wiki/Interdata

science center posts
http://www.garlic.com/~lynn/subtopic.html#545tech
clone controller posts
http://www.garlic.com/~lynn/subtopic.html#360pcm

tty support trivia ... I did the hack using a 1-byte field for tty/ascii
terminal line lengths. IBM had included the support in the standard
distributed system. At one of the cp67 installations in the bldg. across
the court from 545 in tech sq ... somebody down at harvard got an ascii
plotter needing a 1200? byte line length. They just changed the max
length field to 1200, but not the rest of the code ... so the system
would crash as soon as a tty connected.
http://www.multicians.org/thvv/360-67.html
--
virtualization experience starting Jan1968, online at home since Mar1970
Charlie Gibbs
2019-11-28 18:08:29 UTC
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
This particular /30 also had 3rd party disk and tape drives, along
with the memory upgrades. IBM was Not Pleased.
The S/360 history explains some of what was going on. Part
of the third-party hardware was the result of IBM engineers
'jumping ship' after S/360 development, taking their
expertise to newly formed companies. IBM was pissed about
that. But my personal impression is that IBM somewhat created
that situation by working their people ridiculously hard
during S/360 days and then not compensating them adequately.
Their arbitrary limits on how much memory could be attached
to various processors in the 360 line didn't help either.
Post by h***@bbs.cpcn.com
Another part was that S/360 was very successful and IBM
was unable to meet demand.
Another part was that IBM wasn't cheap. IBM gave full
support, but at a price. Later, after unbundling, IBM
was more competitive.
Of course, a third party company was exploiting IBM's
research and development. But there were risks--they
could make a lot of money, but if IBM upgraded its
hardware, they could lose a lot of money.
There was an ongoing cat-and-mouse game for a while.
IBM would change the interface specifications, which
shut out the plug-compatible manufacturers for the year
that it took them to reverse-engineer the new interface.
Wash, rinse, repeat.
Post by h***@bbs.cpcn.com
Tom Watson Jr initially had the attitude that IBM
was entitled to own virtually the entire DP marketplace.
He enacted policies got IBM sued. Later he woke up
and changed their policies, so when the govt sued, IBM
won.
It helped that IBM was able to generate so many wheelbarrows
of paper that the government was swamped.
--
/~\ ***@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ "Alexa, define 'bugging'."
J. Clarke
2019-11-28 18:28:11 UTC
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
This particular /30 also had 3rd party disk and tape drives, along
with the memory upgrades. IBM was Not Pleased.
The S/360 history explains some of what was going on. Part
of the third-party hardware was the result of IBM engineers
'jumping ship' after S/360 development, taking their
expertise to newly formed companies. IBM was pissed about
that. But my personal impression is that IBM somewhat created
that situation by working their people ridiculously hard
during S/360 days and then not compensating them adequately.
Their arbitrary limits on how much memory could be attached
to various processors in the 360 line didn't help either.
Post by h***@bbs.cpcn.com
Another part was that S/360 was very successful and IBM
was unable to meet demand.
Another part was that IBM wasn't cheap. IBM gave full
support, but at a price. Later, after unbundling, IBM
was more competitive.
Of course, a third party company was exploiting IBM's
research and development. But there were risks--they
could make a lot of money, but if IBM upgraded its
hardware, they could lose a lot of money.
There was an ongoing cat-and-mouse game for a while.
IBM would change the interface specifications, which
shut out the plug-compatible manufacturers for the year
that it took them to reverse-engineer the new interface.
Wash, rinse, repeat.
Post by h***@bbs.cpcn.com
Tom Watson Jr initially had the attitude that IBM
was entitled to own virtually the entire DP marketplace.
He enacted policies got IBM sued. Later he woke up
and changed their policies, so when the govt sued, IBM
won.
It helped that IBM was able to generate so many wheelbarrows
of paper that the government was swamped.
Double win--they probably had to obtain a computer from IBM to track
it all.
Quadibloc
2019-12-01 21:16:58 UTC
Post by J. Clarke
Double win--they probably had to obtain a computer from IBM to track
it all.
Unless they needed an even bigger computer from Control Data. (Which, unlike
the U.S. Government, succeeded in its lawsuit against IBM.)

John Savard
Niklas Karlsson
2019-12-03 15:14:02 UTC
Post by Quadibloc
Post by J. Clarke
Double win--they probably had to obtain a computer from IBM to track
it all.
Unless they needed an even bigger computer from Control Data. (Which, unlike
the U.S. Government, succeeded in its lawsuit against IBM.)
"Before it controls you," indeed.

Niklas
--
"Once packets are in, who cares where they go out? That's not my department,"
says Wernher von Route.
h***@bbs.cpcn.com
2019-12-04 21:01:18 UTC
Post by Quadibloc
Post by J. Clarke
Double win--they probably had to obtain a computer from IBM to track
it all.
Unless they needed an even bigger computer from Control Data. (Which, unlike
the U.S. Government, succeeded in its lawsuit against IBM.)
The US Govt was counting on the CDC lawsuit to help its case.
But when IBM settled with CDC out of court, both sides
destroyed their papers, as was standard practice, and was
perfectly legal. Those papers would've helped the govt.

I am not a lawyer, but I think IBM had changed its practices
enough that it did not deserve the 1970s Federal lawsuit,
which was a needless waste of resources.

Indeed, sometimes I think Federal antitrust efforts aren't
very good. It seems they go after companies that aren't
monopolies but ignore the ones that are.

For instance, there was a relatively small wire company,
Roebling, that wanted to merge into Bethlehem Steel in
1970. The government wouldn't let them. Roebling went
out of business.

On the flip side, I think banks are way too big.

I think Amazon is way too big.
Dan Espen
2019-11-29 00:41:22 UTC
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
This particular /30 also had 3rd party disk and tape drives, along
with the memory upgrades. IBM was Not Pleased.
The S/360 history explains some of what was going on. Part
of the third-party hardware was the result of IBM engineers
'jumping ship' after S/360 development, taking their
expertise to newly formed companies. IBM was pissed about
that. But my personal impression is that IBM somewhat created
that situation by working their people ridiculously hard
during S/360 days and then not compensating them adequately.
Their arbitrary limits on how much memory could be attached
to various processors in the 360 line didn't help either.
Post by h***@bbs.cpcn.com
Another part was that S/360 was very successful and IBM
was unable to meet demand.
Another part was that IBM wasn't cheap. IBM gave full
support, but at a price. Later, after unbundling, IBM
was more competitive.
Of course, a third party company was exploiting IBM's
research and development. But there were risks--they
could make a lot of money, but if IBM upgraded its
hardware, they could lose a lot of money.
There was an ongoing cat-and-mouse game for a while.
IBM would change the interface specifications, which
shut out the plug-compatible manufacturers for the year
that it took them to reverse-engineer the new interface.
Wash, rinse, repeat.
Honestly, I can't remember IBM changing any interface.

They invented lots of new interfaces to keep the competitors
at bay. Mostly unsuccessfully.
--
Dan Espen
r***@gmail.com
2020-03-08 23:07:59 UTC
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was
core but slow core.)
In 1964, that was false advertising.
Quadibloc
2020-03-08 23:50:57 UTC
Post by r***@gmail.com
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was
core but slow core.)
In 1964, that was false advertising.
In 1965, the first System/360 machines were delivered.

There were adjustments to the initially announced lineup, so "Model 60" and
"Model 70" were replaced by the model 65 and model 75.

So the question to ask is: was it possible to order a System/360 Model 75 with 8
megabytes of core storage?

Looking at the 360/75 Functional Characteristics, the configurations available are
the 2075H, 2075I, and 2075J, with 256 Kbytes, 512 Kbytes, and 1 Mbyte
respectively. This was 750 nanosecond memory, in 2365 Processor Storage modules.

Also, you could get 8 microsecond external 2361 Core Storage. Model 2 had a
capacity of 2 MB, and you could attach up to four of them.

So if you were willing to settle for 8 microsecond memory, yes, you could have 8
megabytes of core on a system that you could have ordered on the announcement
date.

John Savard
Peter Flass
2020-03-09 00:19:38 UTC
Post by Quadibloc
Post by r***@gmail.com
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was
core but slow core.)
In 1964, that was false advertising.
In 1965, the first System/360 machines were delivered.
There were adjustments to the initially announced lineup, so "Model 60" and
"Model 70" were replaced by the model 65 and model 75.
So the question to ask is: was it possible to order a System/360 Model 75 with 8
megabytes of core storage?
Looking at 360/75 Functional Characteristics, the configurations avalable are
the 2075H, 2075I, and 2075J, with 256 Kbytes, 512 Kbytes, and 1 Mbyte
respectively. This was 750 nanosecond memory, in 2365 Processor Storage modules.
Also, you could get 8 microsecond external 2631 Core Storage. Model 2 had a
capacity of 2 Mb, and you could attach up to four of them.
So if you were willing to settle for 8 microsecond memory, yes, you could have 8
megabytes of core on a system that you could have ordered on the announcement
date.
Interestingly, OS/360 had features for handling heterogeneous memory. You
could specify program sections to be loaded into slow core and others into
fast. Both the Linkage Editor and JCL allowed for it.
Post by Quadibloc
John Savard
--
Pete
Quadibloc
2020-03-09 20:19:28 UTC
Post by Peter Flass
Interestingly, OS/360 had features for handling heterogeneous memory. You
could specify program sections to be loaded into slow core and others into
fast. Both the Linkage Editor and JCL allowed for it.
I'm not surprised. It's a good thing they did have some provision for that,
otherwise there would be no point in this, because it would just slow the whole
machine down.

But even with a manual provision like that, slow memory is very difficult to
make use of properly, I would think. The way to use it to speed up a computer
instead of slowing it down would be, for example, to use it as a RAM disk or as
a disk cache.

John Savard
Peter Flass
2020-03-09 23:18:48 UTC
Post by Quadibloc
Post by Peter Flass
Interestingly, OS/360 had features for handling heterogeneous memory. You
could specify program sections to be loaded into slow core and others into
fast. Both the Linkage Editor and JCL allowed for it.
I'm not surprised. It's a good thing they did have some provision for that,
otherwise there would be no point in this, because it would just slow the whole
machine down.
But even with a manual provision like that, slow memory is very difficult to
make use of properly, I would think. The way to use it to speed up a computer
instead of slowing it down would be, for example, to use it as a RAM disk or as
a disk cache.
Not enough to do much, but one usermod kept the SYSRES VTOC in core, IIRC.
--
Pete
h***@bbs.cpcn.com
2020-03-09 19:16:06 UTC
Post by Quadibloc
Post by r***@gmail.com
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was
core but slow core.)
In 1964, that was false advertising.
In 1965, the first System/360 machines were delivered.
There were adjustments to the initially announced lineup, so "Model 60" and
"Model 70" were replaced by the model 65 and model 75.
So the question to ask is: was it possible to order a System/360 Model 75 with 8
megabytes of core storage?
Looking at 360/75 Functional Characteristics, the configurations avalable are
the 2075H, 2075I, and 2075J, with 256 Kbytes, 512 Kbytes, and 1 Mbyte
respectively. This was 750 nanosecond memory, in 2365 Processor Storage modules.
Also, you could get 8 microsecond external 2631 Core Storage. Model 2 had a
capacity of 2 Mb, and you could attach up to four of them.
So if you were willing to settle for 8 microsecond memory, yes, you could have 8
megabytes of core on a system that you could have ordered on the announcement
date.
I don't have my S/360 history in front of me, but if memory
serves, the development of the auxiliary core storage went
slowly and was not a successful product (please correct me
if I'm wrong).

Further, I think the amount of storage available for S/360
varied over time. For instance, on the low end, I think the
model 30 was offered with only 8k but later upgraded to 16k.
I suspect the large end had similar situations. Anyway,
I suspect the "Functional Characteristics" for any given
S/360 model would be significantly revised over the years.

I never read Pugh's book on memory, which probably would
explain a lot.
Dan Espen
2020-03-09 20:35:11 UTC
Post by Quadibloc
Post by r***@gmail.com
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was core
but slow core.)
In 1964, that was false advertising.
In 1965, the first System/360 machines were delivered.
There were adjustments to the initially announced lineup, so "Model
60" and "Model 70" were replaced by the model 65 and model 75.
So the question to ask is: was it possible to order a System/360
Model 75 with 8 megabytes of core storage?
Looking at 360/75 Functional Characteristics, the configurations
avalable are the 2075H, 2075I, and 2075J, with 256 Kbytes, 512
Kbytes, and 1 Mbyte respectively. This was 750 nanosecond memory, in
2365 Processor Storage modules.
Also, you could get 8 microsecond external 2631 Core Storage. Model 2
had a capacity of 2 Mb, and you could attach up to four of them.
So if you were willing to settle for 8 microsecond memory, yes, you
could have 8 megabytes of core on a system that you could have
ordered on the announcement date.
I don't have my S/360 history in front of me, but if memory serves,
the development of the auxiliary core storage went slowly and was not
a successful product (please correct me if I'm wrong).
Further, I think the amount of storage available for S/360 varied over
time. For instance, on the low end, I think the model 30 was offered
with only 8k but later upgraded to 16k. I suspect the large end had
similar situations. Anyway, I suspect the "Functional
Characteristics" for any given S/360 model would be significantly
revised over the years.
I never read Pugh's book on memory, which probably would explain a
lot.
A 30 with 8K? What good would that be? A minimum DOS Sysgen would use
ALL of that, more likely it would gen at around 10-12K.

As those systems came out, there was some confusion. A shop I worked in
was running all their apps on 2 16K 14xx's. The 30 showed up with 32K.
It didn't take long to realize that wasn't going to cut it. 32K was a
minimum. If you wanted to run HLL compilers, you'd be better off with
64K. If you wanted to run more than one job at a time, 64K was the
minimum. In a few years, spoolers became common. Even one job at a
time required 64K.

I told an IBM salesman once that IBM had so bollixed up the architecture
that our application programs were easily twice as large as they were on
the 14xx. He checked with his technical people and had to concede the
point.
--
Dan Espen
Peter Flass
2020-03-09 23:18:49 UTC
Post by Dan Espen
Post by Quadibloc
Post by r***@gmail.com
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was core
but slow core.)
In 1964, that was false advertising.
In 1965, the first System/360 machines were delivered.
There were adjustments to the initially announced lineup, so "Model
60" and "Model 70" were replaced by the model 65 and model 75.
So the question to ask is: was it possible to order a System/360
Model 75 with 8 megabytes of core storage?
Looking at 360/75 Functional Characteristics, the configurations
avalable are the 2075H, 2075I, and 2075J, with 256 Kbytes, 512
Kbytes, and 1 Mbyte respectively. This was 750 nanosecond memory, in
2365 Processor Storage modules.
Also, you could get 8 microsecond external 2631 Core Storage. Model 2
had a capacity of 2 Mb, and you could attach up to four of them.
So if you were willing to settle for 8 microsecond memory, yes, you
could have 8 megabytes of core on a system that you could have
ordered on the announcement date.
I don't have my S/360 history in front of me, but if memory serves,
the development of the auxiliary core storage went slowly and was not
a successful product (please correct me if I'm wrong).
Further, I think the amount of storage available for S/360 varied over
time. For instance, on the low end, I think the model 30 was offered
with only 8k but later upgraded to 16k. I suspect the large end had
similar situations. Anyway, I suspect the "Functional
Characteristics" for any given S/360 model would be significantly
revised over the years.
I never read Pugh's book on memory, which probably would explain a
lot.
A 30 with 8K? What good would that be? A minimum DOS Sysgen would use
ALL of that, more likely it would gen at around 10-12K.
As those systems came out, there was some confusion. A shop I worked in
was running all their apps on 2 16K 14xx's. The 30 showed up with 32K.
It' didn't take long to realize that wasn't going to cut it. 32K was a
miniumum. If you wanted to run HLL compilers, you'd be better off with
64K. If you wanted to run more than one job at a time, 64K was the
minimum. In a few years, spoolers became common. Even one job at a
time required 64K.
The places I worked at all had 32K. A /40 was more likely to have 64K.
Post by Dan Espen
I told an IBM salesman once that IBM had so bollixed up the architecture
that our application programs were easily twice as large as they were on
the 14xx. He checked with his technical people and had to concede the
point.
--
Pete
Dan Espen
2020-03-09 23:41:35 UTC
Post by Peter Flass
Post by Dan Espen
Post by Quadibloc
Post by r***@gmail.com
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was core
but slow core.)
In 1964, that was false advertising.
In 1965, the first System/360 machines were delivered.
There were adjustments to the initially announced lineup, so "Model
60" and "Model 70" were replaced by the model 65 and model 75.
So the question to ask is: was it possible to order a System/360
Model 75 with 8 megabytes of core storage?
Looking at 360/75 Functional Characteristics, the configurations
avalable are the 2075H, 2075I, and 2075J, with 256 Kbytes, 512
Kbytes, and 1 Mbyte respectively. This was 750 nanosecond memory, in
2365 Processor Storage modules.
Also, you could get 8 microsecond external 2631 Core Storage. Model 2
had a capacity of 2 Mb, and you could attach up to four of them.
So if you were willing to settle for 8 microsecond memory, yes, you
could have 8 megabytes of core on a system that you could have
ordered on the announcement date.
I don't have my S/360 history in front of me, but if memory serves,
the development of the auxiliary core storage went slowly and was not
a successful product (please correct me if I'm wrong).
Further, I think the amount of storage available for S/360 varied over
time. For instance, on the low end, I think the model 30 was offered
with only 8k but later upgraded to 16k. I suspect the large end had
similar situations. Anyway, I suspect the "Functional
Characteristics" for any given S/360 model would be significantly
revised over the years.
I never read Pugh's book on memory, which probably would explain a
lot.
A 30 with 8K? What good would that be? A minimum DOS Sysgen would use
ALL of that, more likely it would gen at around 10-12K.
As those systems came out, there was some confusion. A shop I worked in
was running all their apps on 2 16K 14xx's. The 30 showed up with 32K.
It' didn't take long to realize that wasn't going to cut it. 32K was a
miniumum. If you wanted to run HLL compilers, you'd be better off with
64K. If you wanted to run more than one job at a time, 64K was the
minimum. In a few years, spoolers became common. Even one job at a
time required 64K.
The places I worked at all had 32K. A /40 was more likely to have 64K.
I believe all the 30s I saw with 32K pretty quickly got upgraded to 64K.
One place hired me to do a DOS Sysgen so they could use the 64K they
just got upgraded to.

I ended up spending a very long weekend working out why it wouldn't
work. After many hours I decoded the boot logic and single stepped
through boot until I got to the part where DOS cleared all memory
256 bytes at a time.

Then I saw the machine check when it first addressed the memory at 32K+1.

Definitely earned my consulting fee for that.

I worked at a few places that got the 3rd party model 30 upgrade that let
the machine address more than 64K. The 30 was fast enough but 64K
wasn't really adequate for 2 batch regions running COBOL applications.
IBM, as always, was protecting its income stream. They must have known
that larger 30s were needed but they really wanted to sell the more
expensive 40, 50, 65s.

A 30 could easily keep 2 card readers and printers running all out.
Probably even 3 or 4.

Later I worked at a mailing operation. They ran a dozen 1403s at once
but used a 65.
--
Dan Espen
Peter Flass
2020-03-10 19:29:56 UTC
Post by Dan Espen
Post by Peter Flass
Post by Dan Espen
Post by Quadibloc
Post by r***@gmail.com
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was core
but slow core.)
In 1964, that was false advertising.
In 1965, the first System/360 machines were delivered.
There were adjustments to the initially announced lineup, so "Model
60" and "Model 70" were replaced by the model 65 and model 75.
So the question to ask is: was it possible to order a System/360
Model 75 with 8 megabytes of core storage?
Looking at 360/75 Functional Characteristics, the configurations
avalable are the 2075H, 2075I, and 2075J, with 256 Kbytes, 512
Kbytes, and 1 Mbyte respectively. This was 750 nanosecond memory, in
2365 Processor Storage modules.
Also, you could get 8 microsecond external 2631 Core Storage. Model 2
had a capacity of 2 Mb, and you could attach up to four of them.
So if you were willing to settle for 8 microsecond memory, yes, you
could have 8 megabytes of core on a system that you could have
ordered on the announcement date.
I don't have my S/360 history in front of me, but if memory serves,
the development of the auxiliary core storage went slowly and was not
a successful product (please correct me if I'm wrong).
Further, I think the amount of storage available for S/360 varied over
time. For instance, on the low end, I think the model 30 was offered
with only 8k but later upgraded to 16k. I suspect the large end had
similar situations. Anyway, I suspect the "Functional
Characteristics" for any given S/360 model would be significantly
revised over the years.
I never read Pugh's book on memory, which probably would explain a
lot.
A 30 with 8K? What good would that be? A minimum DOS Sysgen would use
ALL of that, more likely it would gen at around 10-12K.
As those systems came out, there was some confusion. A shop I worked in
was running all their apps on 2 16K 14xx's. The 30 showed up with 32K.
It' didn't take long to realize that wasn't going to cut it. 32K was a
miniumum. If you wanted to run HLL compilers, you'd be better off with
64K. If you wanted to run more than one job at a time, 64K was the
minimum. In a few years, spoolers became common. Even one job at a
time required 64K.
The places I worked at all had 32K. A /40 was more likely to have 64K.
I believe all the 30s I saw with 32K pretty quickly got upgraded to 64K.
One place hired me to do a DOS Sysgen so they could use the 64K they
just got upgraded to.
I ended up spending a very long weekend working out why it wouldn't
work. After many hours I decoded the boot logic and single stepped
through boot until I got to the part where DOS cleared all memory
256 bytes at a time.
Then I saw the machine check when it first addressed the memory at 32K+1.
Definitely earned my consulting fee for that.
I think there was a jumper that needed to be set for >32K, so maybe someone
installed the memory and forgot the jumper?
Post by Dan Espen
I worked at a few places that got the 3rd party model 30 upgrade that let
the machine address more than 64K. The 30 was fast enough but 64K
wasn't really adequate for 2 batch regions running COBOL applications.
IBM, as always, was protecting it's income stream. They must have known
that larger 30s were needed but they really wanted to sell the more
expensive 40, 50, 65s.
A 30 could easily keep 2 card readers and printers running all out.
Probably even 3 or 4.
Once they got POWER.
Post by Dan Espen
Later I worked at a mailing operation. They ran a dozen 1403s at once
but used a 65.
--
Pete
Dan Espen
2020-03-10 23:02:47 UTC
Post by Peter Flass
Post by Dan Espen
Post by Peter Flass
Post by Dan Espen
Post by Quadibloc
Post by r***@gmail.com
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was core
but slow core.)
In 1964, that was false advertising.
In 1965, the first System/360 machines were delivered.
There were adjustments to the initially announced lineup, so "Model
60" and "Model 70" were replaced by the model 65 and model 75.
So the question to ask is: was it possible to order a System/360
Model 75 with 8 megabytes of core storage?
Looking at 360/75 Functional Characteristics, the configurations
avalable are the 2075H, 2075I, and 2075J, with 256 Kbytes, 512
Kbytes, and 1 Mbyte respectively. This was 750 nanosecond memory, in
2365 Processor Storage modules.
Also, you could get 8 microsecond external 2631 Core Storage. Model 2
had a capacity of 2 Mb, and you could attach up to four of them.
So if you were willing to settle for 8 microsecond memory, yes, you
could have 8 megabytes of core on a system that you could have
ordered on the announcement date.
I don't have my S/360 history in front of me, but if memory serves,
the development of the auxiliary core storage went slowly and was not
a successful product (please correct me if I'm wrong).
Further, I think the amount of storage available for S/360 varied over
time. For instance, on the low end, I think the model 30 was offered
with only 8k but later upgraded to 16k. I suspect the large end had
similar situations. Anyway, I suspect the "Functional
Characteristics" for any given S/360 model would be significantly
revised over the years.
I never read Pugh's book on memory, which probably would explain a
lot.
A 30 with 8K? What good would that be? A minimum DOS Sysgen would use
ALL of that, more likely it would gen at around 10-12K.
As those systems came out, there was some confusion. A shop I worked in
was running all their apps on 2 16K 14xx's. The 30 showed up with 32K.
It didn't take long to realize that wasn't going to cut it. 32K was a
minimum. If you wanted to run HLL compilers, you'd be better off with
64K. If you wanted to run more than one job at a time, 64K was the
minimum. In a few years, spoolers became common. Even one job at a
time required 64K.
The places I worked at all had 32K. A /40 was more likely to have 64K.
I believe all the 30s I saw with 32K pretty quickly got upgraded to 64K.
One place hired me to do a DOS Sysgen so they could use the 64K they
just got upgraded to.
I ended up spending a very long weekend working out why it wouldn't
work. After many hours I decoded the boot logic and single stepped
through boot until I got to the part where DOS cleared all memory
256 bytes at a time.
Then I saw the machine check when it first addressed the memory at 32K+1.
Definitely earned my consulting fee for that.
I think there was a jumper that needed to be set for >32K, so maybe someone
installed the memory and forgot the jumper?
Wow, this was years ago, I'd guess 1969,
but that rings a bell. I think you got it.

The memory was installed by IBM, the IBM CE came in and fixed it.
All we were seeing was that the sysgen would work, but any attempt to boot
locked up the machine.
The reference to the bad memory caused a machine check and
all that did was try to run a non-existent machine check
PSW. There was nothing in memory to even point at the
failing instruction. Only single stepping thru the long core
clearing process turned on the lights.
--
Dan Espen
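The failure Dan describes can be modeled in a few lines (the sizes and the 256-byte clearing step are from the story above; everything else is invented for illustration):

```python
INSTALLED = 64 * 1024      # core physically installed after the upgrade
JUMPER_LIMIT = 32 * 1024   # addressing ceiling when the >32K jumper is not set

class MachineCheck(Exception):
    """Stand-in for the hardware machine-check interrupt."""

def clear_core(limit_without_jumper, jumper_set):
    """Clear memory 256 bytes at a time, as the DOS boot sequence did."""
    addressable = INSTALLED if jumper_set else limit_without_jumper
    for addr in range(0, INSTALLED, 256):
        if addr >= addressable:
            # First touch past the addressable ceiling: machine check --
            # and with nothing loaded at the machine-check new PSW yet,
            # the real machine simply locked up.
            raise MachineCheck(f"machine check at address {addr:#x}")

clear_core(JUMPER_LIMIT, jumper_set=True)   # completes quietly

try:
    clear_core(JUMPER_LIMIT, jumper_set=False)
except MachineCheck as e:
    print(e)                                # machine check at address 0x8000
```

The first failing address is the start of the first 256-byte block above 32K, which is why single-stepping through the clearing loop was the only way to see it.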
h***@bbs.cpcn.com
2020-03-11 19:09:49 UTC
Reply
Permalink
Post by Peter Flass
Post by Dan Espen
I worked at a few places that got the 3rd party model 30 upgrade that let
the machine address more than 64K. The 30 was fast enough but 64K
wasn't really adequate for 2 batch regions running COBOL applications.
IBM, as always, was protecting its income stream. They must have known
that larger 30s were needed but they really wanted to sell the more
expensive 40, 50, 65s.
A 30 could easily keep 2 card readers and printers running all out.
Probably even 3 or 4.
Once they got POWER.
Our 360-40 site installed POWER in a foreground partition.
It was easy to install and ran fine. The productivity
improvement was amazing--almost doubling throughput. No
programming changes to our applications were required.

I was amazed at how it somehow captured all unit record
I/O and stored it on disk for later handling, automatically.

Before POWER, it was obvious our CPU was I/O bound.
When reading and printing at full speed, the CPU wait light
was almost continuously on, indicating there were plenty of
CPU cycles available.

Fast forward to the future, there was a check printing
application in which the spooler was bypassed and
the checks printed 'hot'.
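What POWER was doing on the /40 above — capturing unit-record output onto disk so the application never waits on the printer — can be sketched in miniature (the file layout and names here are invented, not the actual POWER internals):

```python
import os
import tempfile

class Spooler:
    """A minimal output spooler: the application 'prints' at CPU speed
    into a disk-backed queue; the slow physical device is drained later."""

    def __init__(self, path):
        self.path = path

    def write(self, line):
        # Application side: append the record to the spool file and
        # return immediately, instead of waiting on the printer.
        with open(self.path, "a") as spool:
            spool.write(line + "\n")

    def drain(self):
        # Printer side: later, feed the queued records to the device.
        with open(self.path) as spool:
            return [rec.rstrip("\n") for rec in spool]

spool = Spooler(os.path.join(tempfile.mkdtemp(), "prt.queue"))
for i in range(3):
    spool.write(f"PAYROLL LINE {i}")    # returns instantly; no printer wait
print(spool.drain())                    # ['PAYROLL LINE 0', 'PAYROLL LINE 1', 'PAYROLL LINE 2']
```

The check-printing exception mentioned above is exactly the case where this indirection is unwanted: the operator needs the physical form under the hammers while the program is still talking to them.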
Peter Flass
2020-03-11 21:52:19 UTC
Reply
Permalink
Post by h***@bbs.cpcn.com
Fast forward to the future, there was a check printing
application in which the spooler was bypassed and
the checks printed 'hot'.
Typical. The ones I’ve seen, the program would print a dummy check of all
“X”s for alignment and then ask the operator to verify and reply if OK, or
adjust and reprint again. None of this can be done thru a spooler (AFAIK).
--
Pete
Charlie Gibbs
2020-03-12 19:08:31 UTC
Reply
Permalink
Post by Peter Flass
Post by h***@bbs.cpcn.com
Fast forward to the future, there was a check printing
application in which the spooler was bypassed and
the checks printed 'hot'.
Typical. The ones I’ve seen the program would print a dummy check of all
“X”s for alignment and then ask the operator to verify and reply if OK, or
adjust and reprint again. None of this can be done thru a spooler (AFAIK).
The fun began when you tried to ensure that the computer-generated
cheque number lined up with the pre-printed ones on the cheques.
You'd pray that there were no forms jams and that the operator
didn't need too many tries to get things aligned.
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Dan Espen
2020-03-12 20:46:47 UTC
Reply
Permalink
Post by Charlie Gibbs
Post by Peter Flass
Post by h***@bbs.cpcn.com
Fast forward to the future, there was a check printing
application in which the spooler was bypassed and
the checks printed 'hot'.
Typical. The ones I’ve seen the program would print a dummy check of all
“X”s for alignment and then ask the operator to verify and reply if OK, or
adjust and reprint again. None of this can be done thru a spooler (AFAIK).
The fun began when you tried to ensure that the computer-generated
cheque number lined up with the pre-printed ones on the cheques.
You'd pray that there were no forms jams and that the operator
didn't need too many tries to get things aligned.
I could not convince payroll that computer printed check numbers
instead of pre-printed would work.

They did keep track of every blank check by accounting for the check
numbers. So perhaps they had a point.
--
Dan Espen
h***@bbs.cpcn.com
2020-03-12 22:07:47 UTC
Reply
Permalink
Post by Dan Espen
Post by Charlie Gibbs
Post by Peter Flass
Post by h***@bbs.cpcn.com
Fast forward to the future, there was a check printing
application in which the spooler was bypassed and
the checks printed 'hot'.
Typical. The ones I’ve seen the program would print a dummy check of all
“X”s for alignment and then ask the operator to verify and reply if OK, or
adjust and reprint again. None of this can be done thru a spooler (AFAIK).
The fun began when you tried to ensure that the computer-generated
cheque number lined up with the pre-printed ones on the cheques.
You'd pray that there were no forms jams and that the operator
didn't need too many tries to get things aligned.
I could not convince payroll that computer printed check numbers
instead of pre-printed would work.
They did keep track of every blank check by accounting for the check
numbers. So perhaps they had a point.
I think auditors like to track by the pre-printed check numbers.
Everywhere I worked the operators had to account for every single
check, both used to align the printer, other waste, and actually
printed.

In some applications the pre-printed check number was to match
the computer generated check number.

Checks had a lot of controls. Apparently they were necessary
since forgery and theft could be a problem.
h***@bbs.cpcn.com
2020-03-11 19:10:47 UTC
Reply
Permalink
Post by Peter Flass
I think there was a jumper that needed to be set for >32K, so maybe someone
installed the memory and forgot the jumper?
When we added memory to our 360-40, they screwed up the jumper
and memory access was screwy until it was fixed.
Charlie Gibbs
2020-03-10 01:06:37 UTC
Reply
Permalink
Post by Dan Espen
A 30 with 8K? What good would that be? A minimum DOS Sysgen would use
ALL of that, more likely it would gen at around 10-12K.
As those systems came out, there was some confusion. A shop I worked in
was running all their apps on 2 16K 14xx's. The 30 showed up with 32K.
It didn't take long to realize that wasn't going to cut it. 32K was a
minimum. If you wanted to run HLL compilers, you'd be better off with
64K. If you wanted to run more than one job at a time, 64K was the
minimum. In a few years, spoolers became common. Even one job at a
time required 64K.
I told an IBM salesman once that IBM had so bollixed up the architecture
that our application programs were easily twice as large as they were on
the 14xx. He checked with his technical people and had to concede the
point.
Or maybe it was like the Univac salesmen who consistently low-balled
memory requirements to get a low bid. Many customers found themselves
buying memory expansions they were told they'd never need, after their
new installation bogged down. Eventually even the documentation admitted
to memory requirements of various packages being more than they originally
claimed.

Besides, the memory size limits on the various 360 processors were
largely self-imposed. At worst it took a bit of work with a soldering
iron to work around them. I once used a 360/30 with 128K of core; the
switch and indicator for the extra address bit had a definite home-brewed
look to them.

Greyhound computer bought up a lot of 360/30 processors that had come
off lease, and hung as much as 512K on them.
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
h***@bbs.cpcn.com
2020-03-10 18:33:05 UTC
Reply
Permalink
Post by Charlie Gibbs
Or maybe it was like the Univac salesmen who consistently low-balled
memory requirements to get a low bid. Many customers found themselves
buying memory expansions they were told they'd never need, after their
new installation bogged down. Eventually even the documentation admitted
to memory requirements of various packages being more than they originally
claimed.
They say IBM lowballed, too (see above).

FWIW, our 90/30 was adequately configured. I think we had 256k.
Of course, we weren't doing very much online work, it was mostly
batch and single stream.
Post by Charlie Gibbs
Besides, the memory size limits on the various 360 processors were
largely self-imposed. At worst it took a bit of work with a soldering
iron to work around them. I once used a 360/30 with 128K of core; the
switch and indicator for the extra address bit had a definite home-brewed
look to them.
Greyhound computer bought up a lot of 360/30 processors that had come
off lease, and hung as much as 512K on them.
Anyone know how a bus company got into computers?
Charlie Gibbs
2020-03-10 18:48:29 UTC
Reply
Permalink
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
Or maybe it was like the Univac salesmen who consistently low-balled
memory requirements to get a low bid. Many customers found themselves
buying memory expansions they were told they'd never need, after their
new installation bogged down. Eventually even the documentation admitted
to memory requirements of various packages being more than they originally
claimed.
They say IBM lowballed, too (see above).
FWIW, our 90/30 was adequately configured. I think we had 256k.
Of course, we weren't doing very much online work, it was mostly
batch and single stream.
Luxury! I don't know whether any 90/30 shop here in Vancouver had
256K. Most were 192K; there were some 128K shops, but they were
uncomfortably tight.

This didn't stop Univac from advertising that their equivalent of
CICS (confusingly called IMS/90) could run in as little as 32K.
The fine print mentioned that this was for a system running three
terminals doing simple inquiries into a single ISAM file. And in
subsequent releases they admitted you needed 64K for even that.

(Actually, make that "65K". Since this is sales literature,
the "salesman's K" is used throughout. Where else but in the
wild and wacky world of marketing could you add 32 and 32 to
get 65?)
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
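Charlie's "salesman's K" is easy to reproduce: count the increments in binary K (1024 bytes), then restate the total in decimal thousands:

```python
def binary_k(n_k):
    """Convert n_k binary K (1 K = 1024 bytes) to bytes."""
    return n_k * 1024

total_bytes = binary_k(32) + binary_k(32)   # two 32K increments

print(total_bytes)            # 65536
print(total_bytes // 1000)    # 65 -- the brochure's "65K"
print(total_bytes // 1024)    # 64 -- what the programmer actually gets
```

Same bytes both ways; only the size of the K changes between the spec sheet and the machine room.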
h***@bbs.cpcn.com
2020-03-10 19:09:41 UTC
Reply
Permalink
Post by Charlie Gibbs
This didn't stop Univac from advertising that their equivalent of
CICS (confusingly called IMS/90) could run in as little as 32K.
The fine print mentioned that this was for a system running three
terminals doing simple inquiries into a single ISAM file. And in
subsequent releases they admitted you needed 64K for even that.
I don't recall the name of the online monitor Univac
provided us, but it was a total piece of crap. We never
could run anything on it. Unlike even early versions of
CICS, it required extensive hex and binary coding for
both the screen maps and applications.

We had training at Univac HQ and that was crap too. The
instructor was unprepared. There was no computer time
available as needed. Last minute arrangements were rushed.

I was shocked Univac would offer something to a customer
so crappy.

I was too low on the totem pole to know background,
but I didn't quite understand the point of the 90 series
product line. Unlike other Univac products, it was a
byte machine like a S/360. But even though it was similar
to a S/360 architecture, it wasn't marketed as such.
So, it really didn't fit in the Univac world. We had
conflicts where others would code in ways that caused
abends--they didn't understand you needed a carriage
control and couldn't have spaces in a numeric field
in the 360 world. They didn't understand ISAM.


http://bitsavers.org/pdf/univac/series_90/Univac_90_30_System_Brochure_Mar74.pdf

I think Univac would've done better if they just marketed
it as a S/360 compatible box.

As mentioned, the only positive I saw out of the whole
experience was that Univac personnel were friendly nice
people, more down to earth than IBM people.

(I heard a few months after I left that employer they ditched
the Univac and got a different machine. A new director also
fired most of the staff. Glad I left early.)
Charlie Gibbs
2020-03-10 22:46:37 UTC
Reply
Permalink
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
This didn't stop Univac from advertising that their equivalent of
CICS (confusingly called IMS/90) could run in as little as 32K.
The fine print mentioned that this was for a system running three
terminals doing simple inquiries into a single ISAM file. And in
subsequent releases they admitted you needed 64K for even that.
I don't recall the name of the online monitor Univac
provided us, but it was a total piece of crap. We never
could run anything on it.
That would have been IMS. I described it as being designed
to maximize system and programmer activity. Every time you
sneezed you had to regenerate it, a process that consumed
45 minutes and spat out a couple of hundred pages of largely
useless listings. Testing was limited to after hours, since
you'd have to take over the entire user network. I eventually
wrote a single-user simulator that would read an IMSgen deck
and configure itself at startup (in approximately zero seconds),
and which only took over the programmer's network (which ran on
a different set of lines with a totally different protocol).
This meant I could run quick tests during the day, if I could
persuade the rest of the programming staff to go for coffee.

Allinson-Ross, in Toronto, developed a monitor called TIP/30,
which was easier to use - and it could run on the same network
as the programming utilities, which made life a lot easier.
TIP/30 had a programming editor which IIRC was based on an
editor from Bell Labs, and you also had online access to
spool files, which was nice for reviewing test runs.
Post by h***@bbs.cpcn.com
Unlike even early versions of CICS, it required extensive
hex and binary coding for both the screen maps and applications.
Ah, yes... however, DICE (device-independent control expressions)
simplified things somewhat. I got pretty adept at building and
dissecting control strings in COBOL68. (COBOL74's STRING and
UNSTRING verbs were a godsend.)
Post by h***@bbs.cpcn.com
We had training at Univac HQ and that was crap too. The
instructor was unprepared. There was no computer time
available as needed. Last minute arrangements were rushed.
I once went to a JCL course at Univac, and was asking so
many questions that the instructor had me get up and run
part of the class. :-)
Post by h***@bbs.cpcn.com
I was shocked Univac would offer something to a customer
so crappy.
I was too low on the totem pole to know background,
but I didn't quite understand the point of the 90 series
product line. Unlike other Univac products, it was a
byte machine like a S/360. But even though it was similar
to a S/360 architecture, it wasn't marketed as such.
I think there was a lot of the NIH syndrome involved.
I remember one sales rally which I was required to attend;
the presenter was going on about "product differentiation",
which I took to mean: "Make it different, even if it means
screwing it up."
Post by h***@bbs.cpcn.com
So, it really didn't fit in the Univac world. We had
conflicts where others would code in ways that cause
abends--they didn't understand you needed a carriage
control and couldn't have spaces in a numeric field
in the 360 world. They didn't understand ISAM.
I never did get a chance to play with the 1100 series machines.
They were a totally different beast, handled by a different
department within the Univac office.
Post by h***@bbs.cpcn.com
http://bitsavers.org/pdf/univac/series_90/Univac_90_30_System_Brochure_Mar74.pdf
I think Univac would've done better if they just marketed
it as a S/360 compatible box.
It was definitely a workalike. The 90/30's non-privileged
instruction set was bit-for-bit identical with that of the
360/50. The I/O architecture and most privileged instructions
were completely different, though.
Post by h***@bbs.cpcn.com
As mentioned, the only positive I saw out of the whole
experience was that Univac personnel were friendly nice
people, more down to earth than IBM people.
+1
Post by h***@bbs.cpcn.com
(I heard a few months after I left that employer they ditched
the Univac and got a different machine. A new director also
fired most of the staff. Glad I left early.)
Yup, you dodged that bullet... :-)
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
h***@bbs.cpcn.com
2020-03-11 19:18:09 UTC
Reply
Permalink
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
I think Univac would've done better if they just marketed
it as a S/360 compatible box.
It was definitely a workalike. The 90/30's non-privileged
instruction set was bit-for-bit identical with that of the
360/50. The I/O architecture and most privileged instructions
were completely different, though.
Hmm...according to Wikipedia, the 90 series was a popular
and profitable product line for Univac. Apparently it
was derived from both the 9700 series and RCA's Spectra
series.

https://en.wikipedia.org/wiki/UNIVAC_Series_90


For customers, a key issue is cost. My guess was that
our 90/30 was approximately the same as a 370-135 but without
virtual storage. My guess is that Univac was cheaper, maybe
a lot cheaper than IBM.

But, as mentioned, that particular employer didn't seem very
sure of what they wanted to do on the machine beyond some
statistical reporting.

As mentioned, they brought in a variety of COBOL programs
but none of them worked due to the byte/word incompatibility.
Charlie Gibbs
2020-03-11 19:49:45 UTC
Reply
Permalink
Post by h***@bbs.cpcn.com
As mentioned, they brought in a variety of COBOL programs
but none of them worked due to the byte/word incompatibility.
This would make it just as hard to convert programs from an
1100-series machine to a 360 as to a 90/30. The issue was the
1100's word-based architecture. Converting between 90/30 and
360, or earlier Univac 360-lookalikes, was fairly simple, at
least from the standpoint of data files.
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
h***@bbs.cpcn.com
2020-03-11 20:11:53 UTC
Reply
Permalink
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
As mentioned, they brought in a variety of COBOL programs
but none of them worked due to the byte/word incompatibility.
This would make it just as hard to convert programs from an
1100-series machine to a 360 as to a 90/30. The issue was the
1100's word-based architecture. Converting between 90/30 and
360, or earlier Univac 360-lookalikes, was fairly simple, at
least from the standpoint of data files.
It was hard converting RPG programs from the 9700 to 90/30.

I don't know if object or load decks from the 90/30 would
run on a System/360, but my guess is that recompiling
would be easy.
Charlie Gibbs
2020-03-12 19:08:30 UTC
Reply
Permalink
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
As mentioned, they brought in a variety of COBOL programs
but none of them worked due to the byte/word incompatibility.
This would make it just as hard to convert programs from an
1100-series machine to a 360 as to a 90/30. The issue was the
1100's word-based architecture. Converting between 90/30 and
360, or earlier Univac 360-lookalikes, was fairly simple, at
least from the standpoint of data files.
It was hard converting RPG programs from the 9700 to 90/30.
I once converted a set of RPG programs from an IBM System/38
to OS/3. These were 3000-line interactive monsters that
proved that the world had forgotten what RPG stood for.
It was not fun.
Post by h***@bbs.cpcn.com
I don't know if object or load decks from the 90/30 would
run on a System/360,
Definitely not. Different format, different ABIs.
Post by h***@bbs.cpcn.com
but my guess is that recompiling would be easy.
Yup. A few system-specific features (particularly the H card),
but otherwise things were pretty much the same.

I once converted a fairly large assembly-language batch program
from OS/3 to DOS/VS. Aside from the different APIs (e.g. I/O),
it wasn't too difficult.
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Dan Espen
2020-03-12 20:42:12 UTC
Reply
Permalink
Post by Charlie Gibbs
I once converted a fairly large assembly-language batch program
from OS/3 to DOS/VS. Aside from the different APIs (e.g. I/O),
it wasn't too difficult.
Where you would definitely get in trouble is in screen read/write.
That has to be the least compatible thing in the computer industry.
IBM z systems have a slew of ways to do screen I/O, none of them
compatible. Pick CICS, IMS/DC, ISPF, TSO, VTAM: they're all
different and not close to compatible.

Even in the Linux world we have Qt, Gtk, Tk, Curses.
This appears to be unsolved science too, Qt and Gtk keep coming out with
new semi-compatible versions.

So, I had to move a bunch of interactive stuff from IBM S/34 (similar to
the 38 mentioned above) to Wang/VS. I was able to take the S/34 source
screen definitions and write a compatible interpreter for Wang/VS.
The application was COBOL and it moved with very little work. If I had
to do the screens over it would have been a much bigger job.
--
Dan Espen
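Dan's migration approach — keep the old screen-definition source and interpret it on the new system — might look like this in miniature (the field-definition format below is invented for illustration, not actual S/34 SFGR syntax):

```python
# Each definition: (row, column, length, literal) -- literal is None
# for an unprotected input field. Rows and columns are 1-based, as
# screen-definition languages of the era typically were.
SCREEN_DEF = [
    (1, 10, 12, "CUSTOMER INQ"),   # protected literal
    (3, 2, 8, "ACCT NO:"),
    (3, 12, 6, None),              # unprotected input field
]

def render(defs, width=40, height=5):
    """Interpret the definitions into a character-cell screen image."""
    screen = [[" "] * width for _ in range(height)]
    for row, col, length, literal in defs:
        text = (literal or "_" * length)[:length]   # show inputs as underscores
        for i, ch in enumerate(text):
            screen[row - 1][col - 1 + i] = ch
    return ["".join(line).rstrip() for line in screen]

for line in render(SCREEN_DEF):
    print(line)
```

Because the application only sees fields by name and position, the COBOL on top of an interpreter like this barely notices which terminal hardware is underneath — which is why the port "moved with very little work."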
Scott Lurndal
2020-03-12 22:11:07 UTC
Reply
Permalink
Post by Dan Espen
Even in the Linux world we have Qt, Gtk, Tk, Curses.
This appears to be unsolved science too, Qt and Gtk keep coming out with
new semi-compatible versions.
All of which sit on top of standard X11 libraries. Which are all
that is really needed, Gtk+, Qt, etc. just add decorations. X11 Athena
widgets are sufficient, if not pretty.
Charlie Gibbs
2020-03-12 23:49:11 UTC
Reply
Permalink
Post by Scott Lurndal
Post by Dan Espen
Even in the Linux world we have Qt, Gtk, Tk, Curses.
This appears to be unsolved science too, Qt and Gtk keep coming out with
new semi-compatible versions.
All of which sit on top of standard X11 libraries. Which are all
that is really needed, Gtk+, Qt, etc. just add decorations. X11 Athena
widgets are sufficient, if not pretty.
^^^^^^^^^^^^^^^^^^^^^^^^^
Those words have been the death sentence for many good pieces of software.
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Ahem A Rivet's Shot
2020-03-13 08:36:29 UTC
On Thu, 12 Mar 2020 22:11:07 GMT
Post by Scott Lurndal
Post by Dan Espen
Even in the Linux world we have Qt, Gtk, Tk, Curses.
This appears to be unsolved science too, Qt and Gtk keep coming out with
new semi-compatible versions.
All of which sit on top of standard X11 libraries.
Apart from curses which is all you ever need for text mode
applications.
Post by Scott Lurndal
Which are all
that is really needed, Gtk+, Qt, etc. just add decorations. X11 Athena
widgets are sufficient, if not pretty.
The 3D look versions aren't too bad.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Peter Flass
2020-03-11 21:52:21 UTC
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
As mentioned, they brought in a variety of COBOL programs
but none of them worked due to the byte/word incompatibility.
This would make it just as hard to convert programs from an
1100-series machine to a 360 as to a 90/30. The issue was the
1100's word-based architecture. Converting between 90/30 and
360, or earlier Univac 360-lookalikes, was fairly simple, at
least from the standpoint of data files.
When I played with an 1100 UNIVAC supported _three_ COBOL compilers. There
was FIELDATA COBOL, ASCII COBOL, and something else I can’t recall now.
--
Pete
Scott Lurndal
2020-03-11 22:17:20 UTC
Post by Peter Flass
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
As mentioned, they brought in a variety of COBOL programs
but none of them worked due to the byte/word incompatibility.
This would make it just as hard to convert programs from an
1100-series machine to a 360 as to a 90/30. The issue was the
1100's word-based architecture. Converting between 90/30 and
360, or earlier Univac 360-lookalikes, was fairly simple, at
least from the standpoint of data files.
When I played with an 1100 UNIVAC supported _three_ COBOL compilers. There
was FIELDATA COBOL, ASCII COBOL, and something else I can’t recall now.
COBOL74?

Burroughs had both COBOL68 and COBOL74 compilers on medium systems, and I
believe they did COBOL85 for large systems.
Peter Flass
2020-03-11 23:47:35 UTC
Post by Scott Lurndal
Post by Peter Flass
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
As mentioned, they brought in a variety of COBOL programs
but none of them worked due to the byte/word incompatibility.
This would make it just as hard to convert programs from an
1100-series machine to a 360 as to a 90/30. The issue was the
1100's word-based architecture. Converting between 90/30 and
360, or earlier Univac 360-lookalikes, was fairly simple, at
least from the standpoint of data files.
When I played with an 1100 UNIVAC supported _three_ COBOL compilers. There
was FIELDATA COBOL, ASCII COBOL, and something else I can’t recall now.
COBOL74?
Burroughs had both COBOL68 and COBOL74 compilers on medium systems, and I
believe they did COBOL85 for large systems.
It might have been ‘68. I think this was about 73 or 74.
--
Pete
Peter Flass
2020-03-11 21:52:20 UTC
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
I think Univac would've done better if they just marketed
it as a S/360 compatible box.
It was definitely a workalike. The 90/30's non-privileged
instruction set was bit-for-bit identical with that of the
360/50. The I/O architecture and most privileged instructions
were completely different, though.
Hmm...according to Wikipedia, the 90 series was a popular
and profitable product line for Univac. Apparently it
was derived from both the 9700 series and RCA's Spectra
series.
https://en.wikipedia.org/wiki/UNIVAC_Series_90
For customers, a key issue is cost. My guess was that
our 90/30 was approximately the same as a 370-135 but without
virtual storage. My guess is that Univac was cheaper, maybe
a lot cheaper than IBM.
But, as mentioned, that particular employer didn't seem very
sure of what they wanted to do on the machine beyond some
statistical reporting.
As mentioned, they brought in a variety of COBOL programs
but none of them worked due to the byte/word incompatibility.
Spectras sound like pretty nice machines, I’m sorry I never got to work
with one.
--
Pete
r***@gmail.com
2020-03-12 01:17:42 UTC
Post by Peter Flass
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
I think Univac would've done better if they just marketed
it as a S/360 compatible box.
It was definitely a workalike. The 90/30's non-privileged
instruction set was bit-for-bit identical with that of the
360/50. The I/O architecture and most privileged instructions
were completely different, though.
Hmm...according to Wikipedia, the 90 series was a popular
and profitable product line for Univac. Apparently it
was derived from both the 9700 series and RCA's Spectra
series.
https://en.wikipedia.org/wiki/UNIVAC_Series_90
For customers, a key issue is cost. My guess was that
our 90/30 was approximately the same as a 370-135 but without
virtual storage. My guess is that Univac was cheaper, maybe
a lot cheaper than IBM.
But, as mentioned, that particular employer didn't seem very
sure of what they wanted to do on the machine beyond some
statistical reporting.
As mentioned, they brought in a variety of COBOL programs
but none of them worked due to the byte/word incompatibility.
Spectras sound like pretty nice machines, I’m sorry I never got to work
with one.
They were real-time machines, with 4 processor states,
each processor state had a selection of hardware registers,
therefore it was unnecessary to save registers when switching
states. Hence, fast.
The part that I liked was the emergency state. If a power failure
was imminent, the emergency state saved all registers and switched
off the computer before the power failure hit.
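The per-state register banks described above can be pictured with a small sketch (mine, not RCA's design; the class and state labels are hypothetical): switching processor states just selects a different bank, so no registers need to be saved or restored.

```python
# Toy model of per-state register banks, as on the Spectra: each
# processor state owns its own registers, so a state switch is just
# a selector change -- no save/restore traffic, hence fast switching.
class BankedCPU:
    STATES = ("interrupt", "supervisor", "program", "emergency")  # hypothetical labels

    def __init__(self, regs_per_state=16):
        self.banks = {s: [0] * regs_per_state for s in self.STATES}
        self.state = "program"

    def switch(self, new_state):
        # O(1): nothing is written to memory on a state change
        self.state = new_state

    def reg(self, n):
        return self.banks[self.state][n]

    def set_reg(self, n, value):
        self.banks[self.state][n] = value
```

Writing a register in one state and switching to another leaves the first state's registers untouched, which is exactly why no save was needed.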
h***@bbs.cpcn.com
2020-03-12 22:03:30 UTC
Post by Peter Flass
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
I think Univac would've done better if they just marketed
it as a S/360 compatible box.
It was definitely a workalike. The 90/30's non-privileged
instruction set was bit-for-bit identical with that of the
360/50. The I/O architecture and most privileged instructions
were completely different, though.
Hmm...according to Wikipedia, the 90 series was a popular
and profitable product line for Univac. Apparently it
was derived from both the 9700 series and RCA's Spectra
series.
https://en.wikipedia.org/wiki/UNIVAC_Series_90
For customers, a key issue is cost. My guess was that
our 90/30 was approximately the same as a 370-135 but without
virtual storage. My guess is that Univac was cheaper, maybe
a lot cheaper than IBM.
But, as mentioned, that particular employer didn't seem very
sure of what they wanted to do on the machine beyond some
statistical reporting.
As mentioned, they brought in a variety of COBOL programs
but none of them worked due to the byte/word incompatibility.
Spectras sound like pretty nice machines, I’m sorry I never got to work
with one.
I think the Spectra were S/360 compatible.

RCA lost a lot of money on them. I find that amazing in
that RCA was a large well established electronics company.
They were (or should've been) used to selling and
supporting complex equipment to commercial users,
such as radio and television broadcast equipment.
I'm not sure what they did to lose so much money.
I heard they had two separate key locations (Cherry
Hill, NJ, and Boston), which was quite cumbersome.

https://en.wikipedia.org/wiki/RCA_Spectra_70

http://bitsavers.org/pdf/rca/spectra70/70-00-601_Spectra70_System_Info_Man_Dec64.pdf

I always found it curious that RCA managed to
develop such a close copy of S/360 in such a short
period of time.
Quadibloc
2020-03-12 22:29:53 UTC
Post by h***@bbs.cpcn.com
I always found it curious that RCA managed to
develop such a close copy of S/360 in such a short
period of time.
The I/O wasn't the same, it didn't use IBM peripherals.

RCA had experience in building computers.

Most System/360 models - and presumably _all_ the Spectra computers -
were microprogrammed.

So it's: build one computer, write one emulator.

The System/360 was not a hard computer to imitate, as computers go.

John Savard
r***@gmail.com
2020-03-13 02:38:09 UTC
Post by h***@bbs.cpcn.com
Post by Peter Flass
Spectras sound like pretty nice machines, I’m sorry I never got to work
with one.
I think the Spectra were S/360 compatible.
There was one instruction missing, OTTOMH it was TS (Test and Set).
Post by h***@bbs.cpcn.com
RCA lost a lot of money on them. I find that amazing in
that RCA was a large well established electronics company.
They were (or should've been) used to selling and
supporting complex equipment to commercial users,
such as radio and television broadcast equipment.
I'm not sure what they did to lose so much money.
I heard they had two separate key locations (Cherry
Hill, NJ, and Boston), which was quite cumbersome.
https://en.wikipedia.org/wiki/RCA_Spectra_70
http://bitsavers.org/pdf/rca/spectra70/70-00-601_Spectra70_System_Info_Man_Dec64.pdf
I always found it curious that RCA managed to
develop such a close copy of S/360 in such a short
period of time.
IBM was compelled to make available to competitors
all the S/360 specifications as an outcome of the anti-trust settlement.
h***@bbs.cpcn.com
2020-03-13 17:18:23 UTC
Post by r***@gmail.com
Post by h***@bbs.cpcn.com
I always found it curious that RCA managed to
develop such a close copy of S/360 in such a short
period of time.
IBM was compelled to make available to competitors
all the S/360 specifications as an outcome of the anti-trust settlement.
Not sure that's accurate. IBM was required to license patents,
but not necessarily actual machine designs.

In any event, until its announcement, System/360 was a strictly
confidential project within IBM. So, it seems RCA was able
to move very fast upon receipt of instructions in 1964 to release
of their own machines in 1965.
Questor
2020-03-13 18:07:40 UTC
RCA lost a lot of money on them. I find that amazing in
that RCA was a large well established electronics company.
They were (or should've been) used to selling and
supporting complex equipment to commercial users,
such as radio and television broadcast equipment.
I'm not sure what they did to lose so much money.
I heard they had two separate key locations (Cherry
Hill, NJ, and Boston), which was quite cumbersome.
Robert Glass devotes a chapter to RCA in "Computing Catastrophes" (1983).
Peter Flass
2020-03-13 18:43:45 UTC
Post by Questor
RCA lost a lot of money on them. I find that amazing in
that RCA was a large well established electronics company.
They were (or should've been) used to selling and
supporting complex equipment to commercial users,
such as radio and television broadcast equipment.
I'm not sure what they did to lose so much money.
I heard they had two separate key locations (Cherry
Hill, NJ, and Boston), which was quite cumbersome.
Robert Glass devotes a chapter to RCA in "Computing Catastrophes" (1983).
Wow - $28.50 on Amazon.
--
Pete
a***@math.uni.wroc.pl
2020-06-12 01:34:56 UTC
Post by Charlie Gibbs
Post by Dan Espen
A 30 with 8K? What good would that be? A minimum DOS Sysgen would use
ALL of that, more likely it would gen at around 10-12K.
As those systems came out, there was some confusion. A shop I worked in
was running all their apps on 2 16K 14xx's. The 30 showed up with 32K.
It didn't take long to realize that wasn't going to cut it. 32K was a
minimum. If you wanted to run HLL compilers, you'd be better off with
64K. If you wanted to run more than one job at a time, 64K was the
minimum. In a few years, spoolers became common. Even one job at a
time required 64K.
I told an IBM salesman once that IBM had so bollixed up the architecture
that our application programs were easily twice as large as they were on
the 14xx. He checked with his technical people and had to concede the
point.
Or maybe it was like the Univac salesmen who consistently low-balled
memory requirements to get a low bid. Many customers found themselves
buying memory expansions they were told they'd never need, after their
new installation bogged down. Eventually even the documentation admitted
to memory requirements of various packages being more than they originally
claimed.
Besides, the memory size limits on the various 360 processors were
largely self-imposed. At worst it took a bit of work with a soldering
iron to work around them. I once used a 360/30 with 128K of core; the
switch and indicator for the extra address bit had a definite home-brewed
look to them.
Greyhound Computer bought up a lot of 360/30 processors that had come
off lease, and hung as much as 512K on them.
Hmm, I am not sure about the 360/30, but the 360/25 has 16-bit address
registers for channels. Adding more memory would mean either serious
incompatibility (I/O limited to the low 64 kB) or a microcode rewrite.

I have no direct info on the structure of the 360/30 channel hardware,
but using 16-bit address registers was an obvious cost-saving
measure, so I would expect a similar problem as with the 360/25.

A determined outside vendor could probably add a board with
extra hardware for registers and tie enough control
and data lines to it so that it worked. Or maybe add less
hardware and patch the microcode. But making it compatible
was probably much more than some rework with a soldering iron.
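The 64 kB ceiling falls straight out of the register width. A one-line sketch of the arithmetic (the function is hypothetical, just illustrating the truncation):

```python
# A channel that latches addresses in a 16-bit register silently
# truncates anything above 65535, so I/O aimed above the low 64 kB
# would land in the wrong place unless the hardware were reworked.
def channel_latch(address, width_bits=16):
    return address & ((1 << width_bits) - 1)
```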
--
Waldek Hebisch
Quadibloc
2020-03-10 04:38:13 UTC
Post by Dan Espen
A 30 with 8K? What good would that be? A minimum DOS Sysgen would use
ALL of that, more likely it would gen at around 10-12K.
So use BOS instead.

John Savard
Dan Espen
2020-03-10 16:11:47 UTC
Post by Quadibloc
Post by Dan Espen
A 30 with 8K? What good would that be? A minimum DOS Sysgen would use
ALL of that, more likely it would gen at around 10-12K.
So use BOS instead.
Any idea on the run time storage use?
I can't really recall.

Once you got the disk/tape access stuff into memory I don't think
anything would be left in 8K. Certainly COBOL would be out of the question.
--
Dan Espen
h***@bbs.cpcn.com
2020-03-10 18:27:37 UTC
Post by Dan Espen
Post by Quadibloc
Post by r***@gmail.com
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was core
but slow core.)
In 1964, that was false advertising.
In 1965, the first System/360 machines were delivered.
There were adjustments to the initially announced lineup, so "Model
60" and "Model 70" were replaced by the model 65 and model 75.
So the question to ask is: was it possible to order a System/360
Model 75 with 8 megabytes of core storage?
Looking at 360/75 Functional Characteristics, the configurations
available are the 2075H, 2075I, and 2075J, with 256 Kbytes, 512
Kbytes, and 1 Mbyte respectively. This was 750 nanosecond memory, in
2365 Processor Storage modules.
Also, you could get 8 microsecond external 2631 Core Storage. Model 2
had a capacity of 2 Mb, and you could attach up to four of them.
So if you were willing to settle for 8 microsecond memory, yes, you
could have 8 megabytes of core on a system that you could have
ordered on the announcement date.
I don't have my S/360 history in front of me, but if memory serves,
the development of the auxiliary core storage went slowly and was not
a successful product (please correct me if I'm wrong).
Further, I think the amount of storage available for S/360 varied over
time. For instance, on the low end, I think the model 30 was offered
with only 8k but later upgraded to 16k. I suspect the large end had
similar situations. Anyway, I suspect the "Functional
Characteristics" for any given S/360 model would be significantly
revised over the years.
I never read Pugh's book on memory, which probably would explain a
lot.
A 30 with 8K? What good would that be? A minimum DOS Sysgen would use
ALL of that, more likely it would gen at around 10-12K.
Probably not much good at all, which is probably why they
later increased it.

Of course, maybe there were simpler operating systems.

Don't forget, the base 1401 was offered with as little as
1.2k (1,200 characters). I can't imagine anyone doing much
work with that, but apparently it was popular. It basically
functioned as a tabulator and I suppose even with that
little core it performed more sophisticated stuff.

Others have said the full 1401, at 16k, was a luxury
(though our hospital had that).
Post by Dan Espen
As those systems came out, there was some confusion. A shop I worked in
was running all their apps on 2 16K 14xx's. The 30 showed up with 32K.
It didn't take long to realize that wasn't going to cut it. 32K was a
minimum. If you wanted to run HLL compilers, you'd be better off with
64K. If you wanted to run more than one job at a time, 64K was the
minimum. In a few years, spoolers became common. Even one job at a
time required 64K.
I told an IBM salesman once that IBM had so bollixed up the architecture
that our application programs were easily twice as large as they were on
the 14xx. He checked with his technical people and had to concede the
point.
I have been told by many that IBM lowballed bids to get the sale.
Most times the computer delivered needed later upgrades to work well.

I suspect there was considerable 'sticker shock' so maybe
low balling was necessary to make the sale. Core was one
way to lower the price.

I have no idea of pricing, but I suspect core wasn't cheap,
especially in the early years. I believe the IBM history
expected people not to order all that much and had bigger
orders than they expected, and thus had to ramp up core
production. It wasn't easy stringing the wires.

Indeed, even in the PC era, I think the IBM PC maxed at
640K and was offered in lower sizes to save money.
(Everyone I knew got 640k, but I guess some were smaller).
Dan Espen
2020-03-10 18:47:27 UTC
Post by h***@bbs.cpcn.com
Don't forget, the base 1401 was offered with as little as
1.2k (1,200 characters). I can't imagine anyone doing much
work with that, but apparently it was popular. It basically
functioned as a tabulator and I suppose even with that
little core it performed more sophisticated stuff.
I had a night job doing some coding on the smallest 1401.
No tape or disk, so you read in cards, printed a report
and sometimes punched new cards.

That tiny amount of core was all yours, no OS.
You could do quite a bit of logic in that space.

Wikipedia says 1.4, not 1.2. I remember it as 1.4.
--
Dan Espen
Rich Alderson
2020-03-11 19:05:13 UTC
Post by Dan Espen
Post by h***@bbs.cpcn.com
Don't forget, the base 1401 was offered with as little as
1.2k (1,200 characters). I can't imagine anyone doing much
work with that, but apparently it was popular. It basically
functioned as a tabulator and I suppose even with that
little core it performed more sophisticated stuff.
I had a night job doing some coding on the smallest 1401.
No tape or disk, so you read in cards, printed a report
and sometimes punched new cards.
That tiny amount of core was all yours, no OS.
You could do quite a bit of logic in that space.
Wikipedia says 1.4, not 1.2. I remember it as 1.4.
Specifically, from the IBM documentation, 1401 characters. See what they did
there?
--
Rich Alderson ***@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
Dan Espen
2020-03-11 20:33:33 UTC
Post by Rich Alderson
Post by Dan Espen
Post by h***@bbs.cpcn.com
Don't forget, the base 1401 was offered with as little as
1.2k (1,200 characters). I can't imagine anyone doing much
work with that, but apparently it was popular. It basically
functioned as a tabulator and I suppose even with that
little core it performed more sophisticated stuff.
I had a night job doing some coding on the smallest 1401.
No tape or disk, so you read in cards, printed a report
and sometimes punched new cards.
That tiny amount of core was all yours, no OS.
You could do quite a bit of logic in that space.
Wikipedia says 1.4, not 1.2. I remember it as 1.4.
Specifically, from the IBM documentation, 1401 characters. See what they did
there?
Hmm, addressing started at 001, ended at I9I (15999).
Not sure how they'd get 1401 out of that.

I don't think you could put anything at 000.
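The 001-to-I9I range works because the tens digit stays numeric while zone bits on the hundreds and units characters supply the thousands. A hedged sketch of the decoding (my own reconstruction, consistent with the 001 = 1 and I9I = 15999 data points above; the special zoned-zero characters are omitted):

```python
# Reconstruction of 1401 3-character addressing: thousands 0-15 come
# from zone bits on the hundreds and units characters.  Digits carry
# zone 0; S-Z carry the A zone (1), J-R the B zone (2), A-I both (3).
def zone_and_digit(c):
    if c.isdigit():
        return 0, int(c)
    if "A" <= c <= "I":
        return 3, ord(c) - ord("A") + 1
    if "J" <= c <= "R":
        return 2, ord(c) - ord("J") + 1
    if "S" <= c <= "Z":
        return 1, ord(c) - ord("S") + 2
    raise ValueError(c)

def decode_1401_address(addr):
    (zh, dh), (zt, dt), (zu, du) = map(zone_and_digit, addr)
    # On a real 1401 the tens-position zones selected index registers;
    # that is ignored in this sketch.
    return 4000 * zh + 1000 * zu + 100 * dh + 10 * dt + du
```

So "I9I" decodes as 12000 + 3000 + 999 = 15999, the top of a 16K machine.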
--
Dan Espen
h***@bbs.cpcn.com
2020-03-12 21:57:39 UTC
Post by Dan Espen
Post by Rich Alderson
Post by Dan Espen
That tiny amount of core was all yours, no OS.
You could do quite a bit of logic in that space.
Wikipedia says 1.4, not 1.2. I remember it as 1.4.
Specifically, from the IBM documentation, 1401 characters. See what they did
there?
Hmm, addressing started at 001, ended at I9I (15999).
Not sure how they'd get 1401 out of that.
I don't think you could put anything at 000.
Also, the 1401 had word marks, a special bit on each character.
With word marks, certain things like length didn't need to
be coded, saving space.
Dan Espen
2020-03-12 23:59:19 UTC
Post by h***@bbs.cpcn.com
Post by Dan Espen
Post by Rich Alderson
Post by Dan Espen
That tiny amount of core was all yours, no OS.
You could do quite a bit of logic in that space.
Wikipedia says 1.4, not 1.2. I remember it as 1.4.
Specifically, from the IBM documentation, 1401 characters. See what they did
there?
Hmm, addressing started at 001, ended at I9I (15999).
Not sure how they'd get 1401 out of that.
I don't think you could put anything at 000.
Also, the 1401 had word marks, a special bit on each character.
With word marks, certain things like length didn't need to
be coded, saving space.
Those same word marks let you leave off operands in instructions
saving even more space.
--
Dan Espen
Peter Flass
2020-03-13 00:27:44 UTC
Post by Dan Espen
Post by h***@bbs.cpcn.com
Post by Dan Espen
Post by Rich Alderson
Post by Dan Espen
That tiny amount of core was all yours, no OS.
You could do quite a bit of logic in that space.
Wikipedia says 1.4, not 1.2. I remember it as 1.4.
Specifically, from the IBM documentation, 1401 characters. See what they did
there?
Hmm, addressing started at 001, ended at I9I (15999).
Not sure how they'd get 1401 out of that.
I don't think you could put anything at 000.
Also, the 1401 had word marks, a special bit on each character.
With word marks, certain things like length didn't need to
be coded, saving space.
Those same word marks let you leave off operands in instructions
saving even more space.
Interesting instruction set. I think if you were moving several fields to
sequential locations you could leave off the destination addresses after
the first.
--
Pete
Dan Espen
2020-03-13 00:57:20 UTC
Post by Peter Flass
Post by Dan Espen
Post by h***@bbs.cpcn.com
Post by Dan Espen
Post by Rich Alderson
Post by Dan Espen
That tiny amount of core was all yours, no OS.
You could do quite a bit of logic in that space.
Wikipedia says 1.4, not 1.2. I remember it as 1.4.
Specifically, from the IBM documentation, 1401 characters. See what they did
there?
Hmm, addressing started at 001, ended at I9I (15999).
Not sure how they'd get 1401 out of that.
I don't think you could put anything at 000.
Also, the 1401 had word marks, a special bit on each character.
With word marks, certain things like length didn't need to
be coded, saving space.
Those same word marks let you leave off operands in instructions
saving even more space.
Interesting instruction set. I think if you were moving several fields to
sequential locations you could leave off the destination addresses after
the first.
Wonderful instruction set.

If sending and receiving were adjacent, leave off both operands.
Fields did not need to be equal sized, you could roll totals into
larger sized fields, no problem.
Clearing accumulators that were adjacent was one instruction with
operands then as many subtracts as you had accumulators with no operands.

There was a trick with BCE (branch character equal) with character as an
immediate operand.

I think it was:

BCE WHERE,FIELD,CHAR
BCE CHAR2
BCE CHAR3

I think the machine was set to NOT decrement the B register for BCE
only so FIELD could be tested against more than one character.
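Loosely modeling that BCE trick (a sketch, not exact 1401 semantics; the helper and labels are invented): because the tested position does not advance, a chain of BCEs amounts to a multi-way compare of a single character.

```python
# Rough model of chained BCE: the same storage position is compared
# against a series of immediate characters; the first match "branches".
def bce_chain(mem, pos, cases):
    """cases: list of (branch_label, immediate_char) pairs."""
    for label, char in cases:
        if mem[pos] == char:
            return label      # branch taken
    return None               # fall through to the next instruction
```

On a card image like "ORDER-CARD", testing position 8 ('R') against '$', 'R', '*' in turn takes the second branch.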
--
Dan Espen
h***@bbs.cpcn.com
2020-03-13 17:15:41 UTC
Post by Peter Flass
Post by Dan Espen
Post by h***@bbs.cpcn.com
Post by Dan Espen
Post by Rich Alderson
Post by Dan Espen
That tiny amount of core was all yours, no OS.
You could do quite a bit of logic in that space.
Wikipedia says 1.4, not 1.2. I remember it as 1.4.
Specifically, from the IBM documentation, 1401 characters. See what they did
there?
Hmm, addressing started at 001, ended at I9I (15999).
Not sure how they'd get 1401 out of that.
I don't think you could put anything at 000.
Also, the 1401 had word marks, a special bit on each character.
With word marks, certain things like length didn't need to
be coded, saving space.
Those same word marks let you leave off operands in instructions
saving even more space.
Interesting instruction set. I think if you were moving several fields to
sequential locations you could leave off the destination addresses after
the first.
Yes, the 1401 manual calls that "chaining".

"In some programs, it becomes possible to perform a series of operations on several fields that are in consecutive storage locations. Some of the basic operations, such as ADD, SUBTRACT, MOVE, and LOAD, have the ability to be chained so that less time is required to perform the operations, and space is saved in storing instructions"
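A toy model of that chaining (my own sketch; the field layout and names are invented): fields are addressed by their low-order position and bounded by a word mark at the high-order end, and each move leaves the address registers pointing at the next field to the left, so the following move can omit both operands.

```python
class Toy1401:
    """Word marks bound fields at the high-order (left) end; moves run
    right-to-left and leave AAR/BAR positioned for chaining."""
    def __init__(self, chars, mark_positions):
        self.chars = list(chars)
        self.marks = [i in mark_positions for i in range(len(self.chars))]
        self.aar = self.bar = None   # A- and B-address registers

    def mcw(self, a=None, b=None):
        """Move Characters to Word mark; omit a and b to chain."""
        if a is not None:
            self.aar = a
        if b is not None:
            self.bar = b
        while True:
            self.chars[self.bar] = self.chars[self.aar]
            hit = self.marks[self.aar] or self.marks[self.bar]
            self.aar -= 1            # registers end up just left of the
            self.bar -= 1            # fields, i.e. at the next fields
            if hit:
                return

# Two adjacent 3-char fields at 0-5 (word marks at 0 and 3), moved into
# 6-11 with one full MCW and one fully chained MCW (no operands).
m = Toy1401("ABCDEF......", {0, 3, 6, 9})
m.mcw(a=5, b=11)   # moves "DEF"
m.mcw()            # chained: moves "ABC"
```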
r***@gmail.com
2020-03-11 01:06:22 UTC
Post by h***@bbs.cpcn.com
Post by Dan Espen
Post by Quadibloc
Post by r***@gmail.com
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was core
but slow core.)
In 1964, that was false advertising.
In 1965, the first System/360 machines were delivered.
There were adjustments to the initially announced lineup, so "Model
60" and "Model 70" were replaced by the model 65 and model 75.
So the question to ask is: was it possible to order a System/360
Model 75 with 8 megabytes of core storage?
Looking at 360/75 Functional Characteristics, the configurations
available are the 2075H, 2075I, and 2075J, with 256 Kbytes, 512
Kbytes, and 1 Mbyte respectively. This was 750 nanosecond memory, in
2365 Processor Storage modules.
Also, you could get 8 microsecond external 2631 Core Storage. Model 2
had a capacity of 2 Mb, and you could attach up to four of them.
So if you were willing to settle for 8 microsecond memory, yes, you
could have 8 megabytes of core on a system that you could have
ordered on the announcement date.
I don't have my S/360 history in front of me, but if memory serves,
the development of the auxiliary core storage went slowly and was not
a successful product (please correct me if I'm wrong).
Further, I think the amount of storage available for S/360 varied over
time. For instance, on the low end, I think the model 30 was offered
with only 8k but later upgraded to 16k. I suspect the large end had
similar situations. Anyway, I suspect the "Functional
Characteristics" for any given S/360 model would be significantly
revised over the years.
I never read Pugh's book on memory, which probably would explain a
lot.
A 30 with 8K? What good would that be? A minimum DOS Sysgen would use
ALL of that, more likely it would gen at around 10-12K.
Probably not much good at all, which is probably why they
later increased it.
Of course, maybe there were simpler operating systems.
Don't forget, the base 1401 was offered with as little as
1.2k (1,200 characters). I can't imagine anyone doing much
work with that, but apparently it was popular. It basically
functioned as a tabulator and I suppose even with that
little core it performed more sophisticated stuff.
Others have said the full 1401, at 16k, was a luxury
(though our hospital had that).
Post by Dan Espen
As those systems came out, there was some confusion. A shop I worked in
was running all their apps on 2 16K 14xx's. The 30 showed up with 32K.
It didn't take long to realize that wasn't going to cut it. 32K was a
minimum. If you wanted to run HLL compilers, you'd be better off with
64K. If you wanted to run more than one job at a time, 64K was the
minimum. In a few years, spoolers became common. Even one job at a
time required 64K.
I told an IBM salesman once that IBM had so bollixed up the architecture
that our application programs were easily twice as large as they were on
the 14xx. He checked with his technical people and had to concede the
point.
I have been told by many that IBM lowballed bids to get the sale.
Most times the computer delivered needed later upgrades to work well.
Our S/360-50 came with 128K.
The FORTRAN H compiler needed 256K to run.
The PL/I compiler needed 64K.
The system shortly needed an upgrade of 1Mb of LCS core (slow core store).
But to use it effectively, MVT needed 256K of fast core -
after using virtually all of the 128K of fast core,
programs ran in slow core, with a time penalty.
Post by h***@bbs.cpcn.com
I suspect there was considerable 'sticker shock' so maybe
low balling was necessary to make the sale. Core was one
way to lower the price.
I have no idea of pricing, but I suspect core wasn't cheap,
especially in the early years. I believe the IBM history
expected people not to order all that much and had bigger
orders than they expected, and thus had to ramp up core
production. It wasn't easy stringing the wires.
Indeed, even in the PC era, I think the IBM PC maxed at
640K and was offered in lower sizes to save money.
(Everyone I knew got 640k, but I guess some were smaller).
h***@bbs.cpcn.com
2020-03-11 19:32:49 UTC
Post by Dan Espen
A 30 with 8K? What good would that be? A minimum DOS Sysgen would use
ALL of that, more likely it would gen at around 10-12K.
According to a 1964 System Summary, the base offering was only 8k.

The 1974 System Summary has a lot of revisions.
http://bitsavers.org/pdf/ibm/360/systemSummary/GA22-6810-12_360sysSumJan74.pdf
Post by Dan Espen
As those systems came out, there was some confusion. A shop I worked in
was running all their apps on 2 16K 14xx's. The 30 showed up with 32K.
It didn't take long to realize that wasn't going to cut it. 32K was a
minimum. If you wanted to run HLL compilers, you'd be better off with
64K. If you wanted to run more than one job at a time, 64K was the
minimum. In a few years, spoolers became common. Even one job at a
time required 64K.
I told an IBM salesman once that IBM had so bollixed up the architecture
that our application programs were easily twice as large as they were on
the 14xx. He checked with his technical people and had to concede the
point.
--
Dan Espen
Jon Elson
2020-03-12 02:23:23 UTC
Post by Quadibloc
Also, you could get 8 microsecond external 2631 Core Storage. Model 2 had
a capacity of 2 Mb, and you could attach up to four of them.
We had a 360/50 with LCS, it was a DOG! It slowed the /50 to almost the
speed of a /30.

Some high-end installations used the LCS as a swapping store. That's about
the only use for which it actually made sense.

Jon
r***@gmail.com
2020-03-12 03:50:52 UTC
Reply
Permalink
Post by Jon Elson
Post by Quadibloc
Also, you could get 8 microsecond external 2361 Core Storage. Model 2 had
a capacity of 2 MB, and you could attach up to four of them.
We had a 360/50 with LCS, it was a DOG! It slowed the /50 to almost the
speed of a /30.
Upthread I wrote that our S/360 initially had 128K, then enhanced
with LCS. The OS consumed virtually all of the 128K fast core,
leaving programs to run in LCS.
Things improved when HASP was installed, which read program cards
well in advance, and printed as fast as it could, using CCWs.
Post by Jon Elson
Some high-end installations used the LCS as a swapping store. That's about
the only use for which it actually made sense.
Peter Flass
2020-03-12 16:34:21 UTC
Reply
Permalink
Post by Jon Elson
Post by Quadibloc
Also, you could get 8 microsecond external 2361 Core Storage. Model 2 had
a capacity of 2 MB, and you could attach up to four of them.
We had a 360/50 with LCS, it was a DOG! It slowed the /50 to almost the
speed of a /30.
Some high-end installations used the LCS as a swapping store. That's about
the only use for which it actually made sense.
Sort of like what MVS/XA(?) did with expanded storage later. I’m a bit hazy
on details, but it used the “Move Page” instruction to swap blocks into and
out of main storage.
Post by Jon Elson
Jon
--
Pete
Peter Flass
2020-03-09 00:19:37 UTC
Reply
Permalink
Post by r***@gmail.com
Post by h***@bbs.cpcn.com
https://archive.org/details/Nations-Business-1964-12/page/n51
Available with up to 8 meg of memory!
(I thought S/360 could handle up to 16 meg? But maybe in those
days no one could see needing more than 8 meg. Indeed, I think
if you wanted that much you had to get the "LCS" which was
core but slow core.)
In 1964, that was false advertising.
The architecture could use up to 16MB, but at the time no real machine
could.
--
Pete
r***@gmail.com
2020-03-10 01:11:22 UTC
Reply
Permalink
"A 30 with 8K? What good would that be? A minimum DOS Sysgen would use
ALL of that, more likely it would gen at around 10-12K."

But not every shop ran DOS or OS/360. There were 1) BPS, Basic Programming Support, which ran from cards 2) BOS, Basic Operating System, and 3) TOS, Tape Operating System. Whether either of the latter two was smaller than DOS, I know not. BPS was much smaller, but only provided an assembler and maybe an RPG.

For that matter, there was a variant of OS/360 which ran from cards, although I'm not sure what the point was. PCP, Primary Control Program, I think it was called, but I also think it wanted a minimum of 8k.

Bob Netzlof
Rich Alderson
2020-03-10 01:36:39 UTC
Reply
Permalink
Post by r***@gmail.com
"A 30 with 8K? What good would that be? A minimum DOS Sysgen would use
ALL of that, more likely it would gen at around 10-12K."
But not every shop ran DOS or OS/360. There were 1) BPS, Basic Programming
Support, which ran from cards 2) BOS, Basic Operating System, and 3) TOS,
Tape Operating System. Whether either of the latter two was smaller than DOS,
I know not. BPS was much smaller, but only provided an assembler and maybe an
RPG.
BOS and TOS were smaller.
Post by r***@gmail.com
For that matter, there was a variant of OS/360 which ran from cards, although
I'm not sure what the point was. PCP, Primary Control Program, I think it was
called, but I also think it wanted a minimum of 8k.
PCP was used to run an OS/360 sysgen on bare metal, before bootable disk packs
were common.
--
Rich Alderson ***@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
Dan Espen
2020-03-10 04:14:47 UTC
Reply
Permalink
Post by r***@gmail.com
"A 30 with 8K? What good would that be? A minimum DOS Sysgen would
use ALL of that, more likely it would gen at around 10-12K."
But not every shop ran DOS or OS/360. There were 1) BPS, Basic
Programming Support, which ran from cards
This 32K shop tried BPS first. I forget how much core it used, but with
disk and tape access, you'd still be fighting to get much
assembler in 8K.
Post by r***@gmail.com
2) BOS, Basic Operating System, and
Second thing we tried...with RPG. Couldn't get our existing reports to
work with RPG; it just didn't like detail information in its headings.
COBOL would be out of the question in 8K. You could neither compile nor
run.
Post by r***@gmail.com
3) TOS, Tape Operating System.
Never used it, but I'm pretty sure I saw an operator demo it. I
remember tapes making really long forward and backward searches. Same
memory requirements as DOS.
Post by r***@gmail.com
Whether either of the latter two was smaller than DOS, I know not. BPS
was much smaller, but only provided an assembler and maybe an RPG.
For that matter, there was a variant of OS/360 which ran from cards,
although I'm not sure what the point was. PCP, Primary Control
Program, I think it was called, but I also think it wanted a minimum
of 8k.
Bob Netzlof
--
Dan Espen
Peter Flass
2020-03-10 19:29:58 UTC
Reply
Permalink
Post by r***@gmail.com
"A 30 with 8K? What good would that be? A minimum DOS Sysgen would use
ALL of that, more likely it would gen at around 10-12K."
But not every shop ran DOS or OS/360. There were 1) BPS, Basic
Programming Support, which ran from cards 2) BOS, Basic Operating System,
and 3) TOS, Tape Operating System. Whether either of the latter two was
smaller than DOS, I know not. BPS was much smaller, but only provided an
assembler and maybe an RPG.
For that matter, there was a variant of OS/360 which ran from cards,
although I'm not sure what the point was. PCP, Primary Control Program, I
think it was called, but I also think it wanted a minimum of 8k.
It could also be an RJE station, like a Model 20.
--
Pete
r***@gmail.com
2020-03-12 22:07:16 UTC
Reply
Permalink
"Hmm, addressing started at 001, ended at I9I (15999).
Not sure how they'd get 1401 out of that.

"I don't think you could put anything at 000."

Not so. Position 000 was the read stacker select code.
001 - 080 read area
100 - punch stacker select code
101 - 180 punch area
200 - carriage control character
201 - 320 print area (for standard 120 character printer)
201 - 332 print area (for extra cost 132 character printer)

If your program wasn't going to print anything, 200 and up were available for general use. Same for 000 thru 080 if you weren't reading cards, or had read some and weren't going to read any more.

Bob Netzlof
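Taking the areas above as inclusive address ranges (and reading the print
areas as 201-320 for a 120-position printer and 201-332 for 132 positions),
the arithmetic checks out. A quick sketch, not part of the original post:

```python
# Sanity-check the 1401 fixed low-core layout described above.
# Ranges are inclusive on both ends; widths assumed from the post.
def width(lo, hi):
    """Number of character positions in an inclusive address range."""
    return hi - lo + 1

assert width(1, 80) == 80      # read area: one 80-column card image
assert width(101, 180) == 80   # punch area: likewise
assert width(201, 320) == 120  # print area, 120-position printer
assert width(201, 332) == 132  # print area, 132-position printer
```

With no printing, positions 200-332 (carriage control plus print area) were
free for the program, which is why ORG 200 comes up later in the thread.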
Dan Espen
2020-03-13 00:08:01 UTC
Reply
Permalink
Post by r***@gmail.com
"Hmm, addressing started at 001, ended at I9I (15999).
Not sure how they'd get 1401 out of that.
"I don't think you could put anything at 000."
Not so. Position 000 was the read stacker select code.
001 - 080 read area
100 - punch stacker select code
101 - 180 punch area
200 - carriage control character
201 - 320 print area (for standard 120 character printer)
201 - 332 print area (for extra cost 132 character printer)
If your program wasn't going to print anything, 200 and up were available for general use. Same for 000 thru 080 if you weren't reading cards, or had read some and weren't going to read any more.
Bob Netzlof
Goodness.
I must have known that at one time.
Not anymore apparently.

Well, now I'm confused, here is a description of the stacker select
instruction:

https://en.wikipedia.org/wiki/IBM_1401#Modifiers_for_Select_Stacker_(K)_instruction
--
Dan Espen
Peter Flass
2020-03-13 00:27:43 UTC
Reply
Permalink
Post by r***@gmail.com
"Hmm, addressing started at 001, ended at I9I (15999).
Not sure how they'd get 1401 out of that.
"I don't think you could put anything at 000."
Not so. Position 000 was the read stacker select code.
001 - 080 read area
100 - punch stacker select code
101 - 180 punch area
200 - carriage control character
201 - 320 print area (for standard 120 character printer)
201 - 332 print area (for extra cost 132 character printer)
If your program wasn't going to print anything, 200 and up were available
for general use. Same for 000 thru 080 if you weren't reading cards, or
had read some and weren't going to read any more.
Sounds like you’d want a lot of comments in the code if you were going to
do this!
--
Pete
Dan Espen
2020-03-13 00:46:45 UTC
Reply
Permalink
Post by Peter Flass
Post by r***@gmail.com
"Hmm, addressing started at 001, ended at I9I (15999).
Not sure how they'd get 1401 out of that.
"I don't think you could put anything at 000."
Not so. Position 000 was the read stacker select code.
001 - 080 read area
100 - punch stacker select code
101 - 180 punch area
200 - carriage control character
201 - 220 print area (for standard 120 character printer)
201 - 232 print area (for extra cost 132 character printer)
If your program wasn't going to print anything, 200 and up were available
for general use. Same for 000 thru 080 if you weren't reading cards, or
had read some and weren't going to read any more.
Sounds like you’d want a lot of comments in the code if you were going to
do this!
I think:

ORG 200

would be sufficient, maybe:

ORG 200 HEY, NO PRINTING IN THIS PROGRAM

I was always aware of the extra storage, never had to use it.
On the 1440, the one character op codes for read/punch/print were gone
and instead you specified the I/O area in the instruction.

Everyone still used 001-080, etc.

The 1443 supported 144 characters on a line so if you were used to:

ORG 333

you wanted to:

ORG 345

Unless you were fine with the start of your program being wiped
by the time printing started.
--
Dan Espen
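Dan's ORG figures follow directly from the print-area width: the first free
core position is the print-area start plus the line length. A sketch under
that assumption, not part of the original post:

```python
# First free core position after the print area = start + line width.
# These match the ORG values in the post above.
PRINT_START = 201
assert PRINT_START + 132 == 333  # 132-position line -> ORG 333
assert PRINT_START + 144 == 345  # 1443 at 144 positions -> ORG 345
```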
r***@gmail.com
2020-03-13 01:21:45 UTC
Reply
Permalink
"Wonderful instruction set.

"If sending and receiving were adjacent, leave off both operands.
Fields did not need to be equal sized, you could roll totals into
larger sized fields, no problem.
Clearing accumulators that were adjacent was one instruction with
operands then as many subtracts as you had accumulators with no operands."

Indeed. I once had access to some IBM-written source code. Fortran II compiler perhaps. Anyway, the code carried instruction chaining into either a high art or a pathology. I don't recall anything exactly, of course, but I came on a sequence something like:
C THIS,THAT
S
A
A
M
S
M
BE SOMEWHERE

in which C = compare, A = add, S = subtract, M = move, BE = branch if equal and the if equal part depended on what the compare way back at the beginning saw.

Recall also that the I/O op codes were 1 = read, 2 = print, 4 = punch. So, you could load up the punch area with data to be punched, set up the print area with a line to print, then execute 7 which printed, punched, and read a card into the read area.

Note that the carriage control character in the position before the print area lived on into 360 days.
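Taking the codes as 1 = read, 2 = print (write), 4 = punch, the additive op
codes behave exactly like a bitmask. A small illustration of that behavior,
my sketch rather than anything from the original posts:

```python
# The 1401's combined I/O op codes behave like a bitmask:
# read = 1, print (write) = 2, punch = 4, and sums do all of them.
READ, PRINT, PUNCH = 1, 2, 4

def decode(op_code):
    """List the I/O actions a combined 1401 op code performs."""
    actions = []
    if op_code & READ:
        actions.append("read")
    if op_code & PRINT:
        actions.append("print")
    if op_code & PUNCH:
        actions.append("punch")
    return actions

# decode(7) -> ['read', 'print', 'punch']: one instruction drives all three.
```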