Discussion: S/360 model 20
Dan Espen
2020-08-30 04:27:44 UTC
So, I hesitate to say anything; there's no reason for animosity.

Someone said a Model 20 could be 32K; another said 16K.
I checked Wikipedia, which says 32K.
I accessed Functional Characteristics using the Wikipedia link.
That gets the -01 version of the manual.

On page 1, it lists the storage sizes:

Main Storage consists of 4,096; 8,192; 12,288; and
16,384 positions of magnetic core storage.

I'm guessing the 32K is accurate and was implemented in later models.

(See? Confrontation is uncalled for.)

As to whether disk was common, I'm going to take a wild guess and
say that disk was initially uncommon but later might have been more
common. The system I worked on was card only.

Disk would make more sense on the 32K model, since the owner could afford
to be running the OS in 32K.

It's interesting that the 2311 disk, which was also sold for other S/360
models, has fixed sector sizes on the Model 20.

I always thought the variable block sizes on S/360 were a mistake.
They put too much complexity into user space. A pity IBM didn't sectorize
all of its disks on all of its models from the beginning.

Oh boy, I expressed another opinion, now someone is going to call me an
idiot. Oh well.
--
Dan Espen
John Levine
2020-08-30 18:34:21 UTC
Post by Dan Espen
Main Storage consists of 4,096; 8,192; 12,288; and
16,384 positions of magnetic core storage.
I'm guessing the 32K is accurate and was implemented in later models.
Models 1, 2, 3, and 4 were limited to 16K; model 5, which was different
internally and somewhat faster, could go to 32K. I never saw a model 5.
Post by Dan Espen
It's interesting that the 2311 disk which was also sold for other S/360
models has fixed sector sizes on the Model 20.
I always thought the variable block sizes on S/360 was a mistake.
It put too much complexity into user space. A pity IBM didn't sectorize
all of it's disk on all of it's models from the beginning.
I think it was all part of the expensive memory mindset when the 360
was designed. With CKD disks, the ISAM in-memory index only needed one
entry per track, or even one per cylinder and the channel program took
care of finding the right individual record. With the 20's disks, the
track index told it which track a record was on, but it had to read
each sector to find the right one, taking a 25ms disk revolution each
time.

Less than a decade later VSAM KSDS used B-trees which use fixed block
sizes and put the index in the blocks themselves, on the assumption
that the computer will cache recently used blocks.
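The index-granularity trade-off John describes can be sketched in a few lines of Python. The record counts, keys, and per-sector cost model below are illustrative assumptions, not Model 20 specifics; only the 25ms revolution figure comes from the post above.

```python
# Illustrative sketch: cost of locating a record via a one-entry-per-track
# index, on CKD hardware (the channel searches the track) versus sectored
# hardware (read each sector in turn until the key matches).

RECORDS_PER_TRACK = 20   # assumed, for illustration
REV_MS = 25              # one disk revolution, per the figure above

def build_index(tracks):
    """One in-memory index entry per track: the highest key on that track."""
    return [max(t) for t in tracks]

def find_ckd(index, key):
    # CKD: the channel's search command scans the track itself, so the
    # host pays roughly one revolution once the track is known.
    track = next(i for i, hi in enumerate(index) if key <= hi)
    return track, REV_MS

def find_sectored(index, tracks, key):
    # Sectored (Model 20 style): read sectors one at a time until the
    # key matches, paying a revolution for each sector examined.
    track = next(i for i, hi in enumerate(index) if key <= hi)
    revolutions = tracks[track].index(key) + 1
    return track, revolutions * REV_MS

tracks = [list(range(t * 100, t * 100 + RECORDS_PER_TRACK))
          for t in range(10)]
idx = build_index(tracks)
print(find_ckd(idx, 315))               # (3, 25)   -- one revolution
print(find_sectored(idx, tracks, 315))  # (3, 400)  -- sixteen revolutions
```

Either way the in-memory index stays tiny (one entry per track); the difference is entirely in who does the per-record search.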
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Dan Espen
2020-08-30 19:51:59 UTC
Post by John Levine
Post by Dan Espen
Main Storage consists of 4,096; 8,192; 12,288; and
16,384 positions of magnetic core storage.
I'm guessing the 32K is accurate and was implemented in later models.
Models 1,2,3,4 were limited to 16K, model 5 which was different
internally and somewhat faster could go to 32. I never saw a model 5.
Post by Dan Espen
It's interesting that the 2311 disk which was also sold for other S/360
models has fixed sector sizes on the Model 20.
I always thought the variable block sizes on S/360 was a mistake.
It put too much complexity into user space. A pity IBM didn't sectorize
all of it's disk on all of it's models from the beginning.
I think it was all part of the expensive memory mindset when the 360
was designed. With CKD disks, the ISAM in-memory index only needed one
entry per track, or even one per cylinder and the channel program took
care of finding the right individual record.
...

I started to write about how you could do something similar with sectors,
but you're absolutely right. For the channel to search for the data in
a sector, it would have to understand a lot about the file to know what
to match.
Post by John Levine
With the 20's disks, the
track index told it which track a record was on, but it had to read
each sector to find the right one, taking a 25ms disk revolution each
time.
Less than a decade later VSAM KSDS used B-trees which use fixed block
sizes and put the index in the blocks themselves, on the assumption
that the computer will cache recently used blocks.
I found the variable block sizes a real problem.
If you pick a really large block size for the benefit of one
program, all the other programs reading that file must use the
same block size, which could be a problem if a program is
storage-constrained. Back in the 14xx days, one program might read one
sector at a time while another might attempt to read the whole cylinder.

It looks like IBM has tried to adapt by using 4K blocks with VSAM and
PDS/E.
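The memory pressure described above is simple arithmetic. The block sizes and buffer counts below are illustrative (the half-track figure is a common later-era blocking choice, not anything from this thread):

```python
# Illustrative arithmetic: memory a reader must dedicate to buffers just
# to open a file, as a function of the block size the writer chose.
def buffer_bytes(blksize, bufno):
    """Sequential-access style buffering: bufno buffers of blksize each."""
    return blksize * bufno

# A writer picks large, half-track-style blocking for throughput...
big = buffer_bytes(23476, 5)    # 117,380 bytes of buffer space
# ...while a storage-constrained reader would have preferred, say:
small = buffer_bytes(4096, 2)   # 8,192 bytes
print(big, small)
```

Every program opening the file inherits the writer's choice, which is exactly the constraint on a small-region reader.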
--
Dan Espen
John Levine
2020-08-30 22:07:58 UTC
Post by Dan Espen
I found the variable block sizes a real problem.
If you pick a really large block size for the benefit of one
program, all the other programs reading that file must use the
same block size. Which could be a problem if a program is storage
constrained. Back in the 14xx days one program might read one sector
at a time, another might attempt to read the whole cylinder.
It looks like IBM has tried to adapt using 4K blocks with VSAM and PDS/E.
Not at all by coincidence, 4K is also the virtual memory page size.
Modern operating systems treat file I/O and paging together, moving
data back and forth between disk and main memory as needed.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Dan Espen
2020-08-30 22:32:02 UTC
Post by John Levine
Post by Dan Espen
I found the variable block sizes a real problem.
If you pick a really large block size for the benefit of one
program, all the other programs reading that file must use the
same block size. Which could be a problem if a program is storage
constrained. Back in the 14xx days one program might read one sector
at a time, another might attempt to read the whole cylinder.
It looks like IBM has tried to adapt using 4K blocks with VSAM and PDS/E.
Not at all by coincidence, 4K is also the virtual memory page size.
Modern operating systems treat file I/O and paging together, moving
data back and forth between disk and main memory as needed.
Yes, I was going to try to get into that.
As I understand it, IBM has an array of buffering techniques in play:
DB2 has something, IMS has something; they're all over the place.

Unix deals with it at a single level: the entire filesystem gets cached
as best it can.

I don't know if IBM is trying to consciously move toward that or not.
--
Dan Espen
J. Clarke
2020-08-30 22:38:21 UTC
Post by Dan Espen
Post by John Levine
Post by Dan Espen
I found the variable block sizes a real problem.
If you pick a really large block size for the benefit of one
program, all the other programs reading that file must use the
same block size. Which could be a problem if a program is storage
constrained. Back in the 14xx days one program might read one sector
at a time, another might attempt to read the whole cylinder.
It looks like IBM has tried to adapt using 4K blocks with VSAM and PDS/E.
Not at all by coincidence, 4K is also the virtual memory page size.
Modern operating systems treat file I/O and paging together, moving
data back and forth between disk and main memory as needed.
Yes, I was going to try to get into that.
As I understand it, IBM has an array of buffering techniques in play,
DB2 has something, IMS has something, they're all over the place.
Unix deals with it at a single level, the entire filesystem gets cached
as best it can.
I don't know if IBM is trying to consciously move toward that or not.
There's also the matter that 4K is the physical sector size on
virtually all disks currently in production.

Seagate provides a discussion of that transition here:
<https://www.seagate.com/tech-insights/advanced-format-4k-sector-hard-drives-master-ti/>
Dan Espen
2020-08-31 00:37:17 UTC
Post by J. Clarke
Post by Dan Espen
Post by John Levine
Post by Dan Espen
I found the variable block sizes a real problem.
If you pick a really large block size for the benefit of one
program, all the other programs reading that file must use the
same block size. Which could be a problem if a program is storage
constrained. Back in the 14xx days one program might read one sector
at a time, another might attempt to read the whole cylinder.
It looks like IBM has tried to adapt using 4K blocks with VSAM and PDS/E.
Not at all by coincidence, 4K is also the virtual memory page size.
Modern operating systems treat file I/O and paging together, moving
data back and forth between disk and main memory as needed.
Yes, I was going to try to get into that.
As I understand it, IBM has an array of buffering techniques in play,
DB2 has something, IMS has something, they're all over the place.
Unix deals with it at a single level, the entire filesystem gets cached
as best it can.
I don't know if IBM is trying to consciously move toward that or not.
There's also the matter that 4K is the physical sector size on
virtually all disks currently in production.
<https://www.seagate.com/tech-insights/advanced-format-4k-sector-hard-drives-master-ti/>
I'm reading this and thinking, hey this is pretty interesting.

Then I realize my last hard drive went out of service 2 or 3 years ago.

Hard drive, meet punched card.
--
Dan Espen
Bob Eager
2020-08-31 08:54:33 UTC
Post by John Levine
Not at all by coincidence, 4K is also the virtual memory page size.
Modern operating systems treat file I/O and paging together, moving data
back and forth between disk and main memory as needed.
I was working on a system that did that in the 1970s. It used a 4kB page
size.

Not widely used, though. (and not MULTICS)
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
John Levine
2020-08-31 16:36:13 UTC
Post by John Levine
Not at all by coincidence, 4K is also the virtual memory page size.
Modern operating systems treat file I/O and paging together, moving data
back and forth between disk and main memory as needed.
I was working on a system that did that in the 1970s. It used a 4kB page size.
Not widely used, though. (and not MULTICS)
Yeah, I used TSS/360 too. A lovable dog.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Scott Lurndal
2020-08-31 16:57:21 UTC
Post by John Levine
Post by John Levine
Not at all by coincidence, 4K is also the virtual memory page size.
Modern operating systems treat file I/O and paging together, moving data
back and forth between disk and main memory as needed.
I was working on a system that did that in the 1970s. It used a 4kB page size.
Not widely used, though. (and not MULTICS)
Yeah, I used TSS/360 too. A lovable dog.
At Burroughs we called CANDE (time sharing subsystem) "Batch with a Patch".
Bob Eager
2020-08-31 17:21:50 UTC
Post by John Levine
Post by John Levine
Not at all by coincidence, 4K is also the virtual memory page size.
Modern operating systems treat file I/O and paging together, moving
data back and forth between disk and main memory as needed.
I was working on a system that did that in the 1970s. It used a 4kB page size.
Not widely used, though. (and not MULTICS)
Yeah, I used TSS/360 too. A lovable dog.
No, not TSS/360. Not even IBM.

(TSS/8 is fun though)
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Scott Lurndal
2020-08-31 18:10:45 UTC
Post by Bob Eager
Post by John Levine
Post by John Levine
Not at all by coincidence, 4K is also the virtual memory page size.
Modern operating systems treat file I/O and paging together, moving
data back and forth between disk and main memory as needed.
I was working on a system that did that in the 1970s. It used a 4kB page size.
Not widely used, though. (and not MULTICS)
Yeah, I used TSS/360 too. A lovable dog.
No, not TSS/360. Not even IBM.
TSS 8.24 (4kw pages and all) was much more pleasant to use when compared with TSS/360. Heck,
Wylbur was much more pleasant to use than TSS/360.

PAL-D on TSS 8.24 was my introduction to assembler programming.
Quadibloc
2020-08-31 23:10:57 UTC
Post by Bob Eager
(TSS/8 is fun though)
And then there's OS/8, with 6.2 file names.

And a PIP command, just like CP/M.

Of course, there's no money in suing Digital Research. But if Hewlett-Packard
could wind up owning a chunk of Microsoft...

John Savard
Bob Eager
2020-09-01 14:14:42 UTC
Post by Quadibloc
Post by Bob Eager
(TSS/8 is fun though)
And then there's OS/8, with 6.2 file names.
And a PIP command, just like CP/M.
Of course, there's no money in suing Digital Research. But if
Hewlett-Packard could wind up owning a chunk of Microsoft...
Indeed. I mentioned that stuff in a recent talk on the PDP-8.

But the original system I was referring to (with 4kB pages) was none of
these.
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Bill Findlay
2020-09-01 16:41:22 UTC
Post by Bob Eager
Post by Quadibloc
Post by Bob Eager
(TSS/8 is fun though)
And then there's OS/8, with 6.2 file names.
And a PIP command, just like CP/M.
Of course, there's no money in suing Digital Research. But if
Hewlett-Packard could wind up owning a chunk of Microsoft...
Indeed. I mentioned that stuff in a recent talk on the PDP-8.
But the original system I was referring to (with 4kB pages) was none of
these.
Ee, MAn, Such a tease!
--
Bill Findlay
Bob Eager
2020-09-02 08:34:27 UTC
Post by Bob Eager
Post by Quadibloc
Post by Bob Eager
(TSS/8 is fun though)
And then there's OS/8, with 6.2 file names.
And a PIP command, just like CP/M.
Of course, there's no money in suing Digital Research. But if
Hewlett-Packard could wind up owning a chunk of Microsoft...
Indeed. I mentioned that stuff in a recent talk on the PDP-8.
But the original system I was referring to (with 4kB pages) was none of
these.
Ee, MAn, Such a tease!
OK...it was NOT like TOPS-20, but it was like MULTICS.

I worked on it for about eight years, including a large amount of work
porting it to XA architecture.

But there were not many instances.

All disk I/O was unified with paging, and files were mapped into memory
(the only way to access them). Other I/O used a related model, with
buffers that worked the same way. 4kB pages were chosen because that
size was determined to be optimal (the hardware pages were grouped to
make 4kB ones).

Start here (more in the same place):
http://www.ancientgeek.org.uk/EMAS/EMAS_Papers/The_EMAS_2900_Operating_System.pdf

or:

https://tinyurl.com/yyms23z6
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Questor
2020-09-02 07:04:27 UTC
Post by Bob Eager
Post by John Levine
Not at all by coincidence, 4K is also the virtual memory page size.
Modern operating systems treat file I/O and paging together, moving data
back and forth between disk and main memory as needed.
I was working on a system that did that in the 1970s. It used a 4kB page
size.
Not widely used, though. (and not MULTICS)
TOPS-20 would be another. Core, swap space, disk -- managed with pointers.
There are cases where it isn't necessary to do the actual I/O -- the pointer in
some table just gets changed. IIRC, page size is 1000(8), or 512 36-bit words.
The algorithms are documented in the TOPS-20 Monitor Tables and TOPS-20 Monitor
Internals documents, archived in the usual repositories. AIUI, the Linux kernel
adopted a similar model.
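The "pointer in some table just gets changed" idea can be shown with a toy page table (nothing TOPS-20-specific; locations and addresses are made up). Moving a page between core, swap, and file storage retags the entry rather than copying data:

```python
# Toy sketch: a page table where "migrating" a page changes only the
# table entry, not the page contents -- no actual I/O is performed.

PAGES = {}  # page number -> (location, address)

def place(page, location, address):
    PAGES[page] = (location, address)

def migrate(page, new_location):
    # No data moves here; only the pointer/tag in the table changes.
    loc, addr = PAGES[page]
    PAGES[page] = (new_location, addr)

place(7, "core", 0o1000)   # page 7 lives in core at octal 1000
migrate(7, "swap")         # now "on swap": one table update, zero I/O
print(PAGES[7])            # ('swap', 512)
```

The real monitor defers the physical transfer (if one is ever needed) to the pager, which is why some "I/O" in such systems costs nothing.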
Charlie Gibbs
2020-09-01 04:40:40 UTC
Post by John Levine
Post by Dan Espen
I always thought the variable block sizes on S/360 was a mistake.
It put too much complexity into user space. A pity IBM didn't sectorize
all of it's disk on all of it's models from the beginning.
I think it was all part of the expensive memory mindset when the 360
was designed.
In _The Mythical Man Month_, Fred Brooks discusses the decision to
save 100 bytes by omitting leap year code from the supervisor.
Post by John Levine
With CKD disks, the ISAM in-memory index only needed one entry per
track, or even one per cylinder and the channel program took care
of finding the right individual record.
It wasn't just memory that was scarce. CPU cycles were almost as
precious, and the CKD architecture and its search commands allowed
processing to be offloaded onto the channel. It made sense at the
time, even though it doesn't now.
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Quadibloc
2020-09-01 05:10:59 UTC
Post by Charlie Gibbs
In _The Mythical Man Month_, Fred Brooks discusses the decision to
save 100 bytes by omitting leap year code from the supervisor.
Do you mean that they omitted all leap year code, so as to have a problem in 1968,
or just the fancier adjustments so as to have a problem in 2100?

John Savard
Charlie Gibbs
2020-09-01 19:09:57 UTC
Post by Quadibloc
Post by Charlie Gibbs
In _The Mythical Man Month_, Fred Brooks discusses the decision to
save 100 bytes by omitting leap year code from the supervisor.
Do you mean that they omitted all leap year code, so as to have a problem
in 1968, or just the fancier adjustments so as to have a problem in 2100?
The former. They figured that they could afford to have the operator
correct the clock once every four years.
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Dan Espen
2020-09-01 19:41:38 UTC
Post by Charlie Gibbs
Post by Quadibloc
Post by Charlie Gibbs
In _The Mythical Man Month_, Fred Brooks discusses the decision to
save 100 bytes by omitting leap year code from the supervisor.
Do you mean that they omitted all leap year code, so as to have a problem
in 1968, or just the fancier adjustments so as to have a problem in 2100?
The former. They figured that they could afford to have the operator
correct the clock once every four years.
I read stories on IBM-MAIN about correcting the clock twice a year
for daylight saving time. This was fairly recently.
--
Dan Espen
Jon Elson
2020-09-03 15:44:15 UTC
Post by Dan Espen
Post by Charlie Gibbs
Post by Quadibloc
Post by Charlie Gibbs
In _The Mythical Man Month_, Fred Brooks discusses the decision to
save 100 bytes by omitting leap year code from the supervisor.
Do you mean that they omitted all leap year code, so as to have a
problem in 1968, or just the fancier adjustments so as to have a problem
in 2100?
The former. They figured that they could afford to have the operator
correct the clock once every four years.
I read stories on IBM Main about correcting the clock twice a year
for daylight savings. This was fairly recently.
Most 360 (as opposed to 370) installations re-IPL'ed several times a day,
requiring the clock to be set each time. So, not that much of a big deal.

Jon
Dan Espen
2020-09-03 15:51:32 UTC
Post by Jon Elson
Post by Dan Espen
Post by Charlie Gibbs
Post by Quadibloc
Post by Charlie Gibbs
In _The Mythical Man Month_, Fred Brooks discusses the decision to
save 100 bytes by omitting leap year code from the supervisor.
Do you mean that they omitted all leap year code, so as to have a
problem in 1968, or just the fancier adjustments so as to have a problem
in 2100?
The former. They figured that they could afford to have the operator
correct the clock once every four years.
I read stories on IBM Main about correcting the clock twice a year
for daylight savings. This was fairly recently.
Most 360 (as opposed to 370) installations re-IPL'ed several times a day,
requiring the clock to be set each time. So, not that much of a big deal.
The last place I worked avoided IPLs like the plague.
They were specially scheduled for the weekend.
This was development support, not production.
Well, a small bit of production on one LPAR.

We had issues with date/time needing to be in ascending order in logs.
During fall back they'd just turn the machine off for an hour.
--
Dan Espen
Niklas Karlsson
2020-09-03 16:41:27 UTC
Post by Dan Espen
We had issues with date/time needing to be in ascending order in logs.
During fall back they'd just turn the machine off for an hour.
Some years ago, I supported an application with essentially the same
problem, and the same solution. Whoever was on call had to stay up late
and shut the app down, then bring it up an hour later.

I seem to recall it ended up being me every time, for the duration I
worked there. All this could have been avoided if they had just used UTC
timestamps.

Niklas
--
A few minutes ago I attempted to give a flying fsck, but the best I
could do was to watch it skitter across the floor. -- Anthony de Boer
J. Clarke
2020-09-03 16:46:22 UTC
Post by Jon Elson
Post by Dan Espen
Post by Charlie Gibbs
Post by Quadibloc
Post by Charlie Gibbs
In _The Mythical Man Month_, Fred Brooks discusses the decision to
save 100 bytes by omitting leap year code from the supervisor.
Do you mean that they omitted all leap year code, so as to have a
problem in 1968, or just the fancier adjustments so as to have a problem
in 2100?
The former. They figured that they could afford to have the operator
correct the clock once every four years.
I read stories on IBM Main about correcting the clock twice a year
for daylight savings. This was fairly recently.
Most 360 (as opposed to 370) installations re-IPL'ed several times a day,
requiring the clock to be set each time. So, not that much of a big deal.
Our guys re-IPL every Saturday night for some reason.
Dennis Boone
2020-09-03 18:32:48 UTC
Post by J. Clarke
Our guys re-IPL every Saturday night for some reason.
In mainframe culture, that's often considered a good idea: it makes sure
that changes that broke something don't become old, forgotten changes
before the problems are discovered. It also helps keep operators current
on procedures, etc.

In internet culture where everything has to be up 25 hours a day, 367
days per year, 110% reliable, etc, nobody understands.

De
Dan Espen
2020-09-03 18:44:37 UTC
Post by Dennis Boone
Post by J. Clarke
Our guys re-IPL every Saturday night for some reason.
In mainframe culture, that's often considered a good idea: it makes sure
that changes that broke something don't get to be old un-remembered
changes before the problems are discovered. Also helps keep ops current
on procedures, etc.
In internet culture where everything has to be up 25 hours a day, 367
days per year, 110% reliable, etc, nobody understands.
In internet culture they developed a way to replace a kernel without a
reboot: ksplice.

Not to take away from mainframes but some of this open source stuff is
very cool.
--
Dan Espen
John Levine
2020-09-03 22:28:06 UTC
Post by Dan Espen
Post by Dennis Boone
In internet culture where everything has to be up 25 hours a day, 367
days per year, 110% reliable, etc, nobody understands.
In internet culture they develop a way to replace a kernel without a reboot.
ksplice.
That's more about software than hardware. TPF mainframe systems have
stayed up over a decade without a reboot, through multiple hardware
and software upgrades.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
h***@bbs.cpcn.com
2020-09-03 18:09:42 UTC
Post by Charlie Gibbs
The former. They figured that they could afford to have the operator
correct the clock once every four years.
Our S/360 operator had to enter the date and time every day.
Wasn't a big deal. Usually got it right.

Somewhere along the line they got direct connections to
time sources.

Many years ago WU supplied a time signal and clocks.

As an aside, my public school had IBM clocks. They
normally worked OK. But in the change back and forth
with Daylight Saving Time, the clocks went haywire.
It always took a few days to get them back to normal.

https://archive.org/details/Nations-Business-1952-05/page/n103/mode/1up

IBM dumped its clock division in the late 1950s.
Peter Flass
2020-09-03 18:48:46 UTC
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
The former. They figured that they could afford to have the operator
correct the clock once every four years.
Our S/360 operator had to enter the date and time every day.
Wasn't a big deal. Usually got it right.
Somewhere along the line they got direct connections to
time sources.
I seem to recall that there was some third-party device that (somehow) set
the system clock for the operator at IPL. No idea how it worked or if they
actually sold any.

For a long time we IPLd every week after third shift Sunday night. A lot of
changes needed an IPL to take effect. This is still a problem with some
Linux software. It’s probably a good idea to reboot before you forget what
you did to the system.
Post by h***@bbs.cpcn.com
Many years ago WU supplied a time signal and clocks.
As an aside, my public schooling had IBM clocks. They
normally worked ok. But in the change back and forth
with Daylight Savings Time, the clocks went haywire.
It always took a few days to get them back normal.
https://archive.org/details/Nations-Business-1952-05/page/n103/mode/1up
IBM dumped its clock division in the late 1950s.
--
Pete
Charlie Gibbs
2020-09-03 19:49:30 UTC
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
The former. They figured that they could afford to have the operator
correct the clock once every four years.
Our S/360 operator had to enter the date and time every day.
Wasn't a big deal. Usually got it right.
Slightly off topic (this is a.f.c after all), I remember a site
where the receptionist didn't like 24-hour time. Every day at
1 p.m., when the time display changed to 13:00, she'd change it
to 1:00. When she came in the next morning, the display would
be reading 21:00, so she'd change it back to 9:00. As a result,
the only time that the switch would see a midnight crossing was
on weekends, when the switch was unmanned. Therefore, the date
in the switch (and in the call records it generated) would only
advance by two days a week.
Post by h***@bbs.cpcn.com
Somewhere along the line they got direct connections to
time sources.
Many years ago WU supplied a time signal and clocks.
I remember reading about a radio that would tune in WWV or
one of its brethren, and decode the time signal to provide
a digital output.
Post by h***@bbs.cpcn.com
As an aside, my public schooling had IBM clocks. They
normally worked ok. But in the change back and forth
with Daylight Savings Time, the clocks went haywire.
It always took a few days to get them back normal.
I remember that from my school days. I think "spring ahead"
was easier because the clocks would step ahead by one minute
every second, so in a minute the adjustment was complete.
Going the other way was trickier - maybe they lost count.
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Dave Garland
2020-09-04 15:06:41 UTC
Post by Charlie Gibbs
I remember reading about a radio that would tune in WWV or
one of its brethren, and decode the time signal to provide
a digital output.
When I had a BBS, I had a program that would dial WWV or maybe NIST
once a week (the time signal was available by phone as well as radio),
estimate the latency (half of round-trip time), and get the time. It
tracked the computer clock drift, and the rest of the week would make
little adjustments to compensate.
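That scheme (half-round-trip latency estimate, plus tracking drift between checks) is essentially a poor man's NTP, and it sketches out in a few lines. The time source here is a stub standing in for the dial-up service, and the numbers are made up:

```python
# Sketch of the BBS-era approach: query a reference clock occasionally,
# assume latency is half the round trip, then smear small corrections
# over the rest of the week to compensate for measured drift.
import time

def query_time_source():
    """Stub for the dial-up time service; pretend the source is exact."""
    return time.time()

def measure_offset():
    t0 = time.monotonic()
    remote = query_time_source()
    t1 = time.monotonic()
    latency = (t1 - t0) / 2          # half-round-trip estimate
    return remote + latency - time.time()

class DriftTracker:
    """Track drift between checks and spread the correction out."""
    def __init__(self):
        self.rate = 0.0              # seconds of drift per second

    def update(self, offset, elapsed):
        self.rate = offset / elapsed

    def correction(self, dt):
        return self.rate * dt        # small nudge to apply over dt seconds

tracker = DriftTracker()
tracker.update(offset=0.7, elapsed=7 * 86400)  # drifted 0.7s in a week
print(round(tracker.correction(3600), 6))      # per-hour adjustment
```

NTP does the same thing with better statistics (multiple samples, outlier rejection), but the core idea of disciplining a drifting local clock against occasional remote readings is identical.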

Never did understand why computer clocks couldn't be at least as
accurate as a $5 Chinese watch.
Charlie Gibbs
2020-09-04 16:13:49 UTC
Post by Dave Garland
Never did understand why computer clocks couldn't be at least as
accurate as a $5 Chinese watch.
Nobody cared. And now, with ntp, nobody has to.
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
J. Clarke
2020-09-03 21:30:53 UTC
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
The former. They figured that they could afford to have the operator
correct the clock once every four years.
Our S/360 operator had to enter the date and time every day.
Wasn't a big deal. Usually got it right.
Somewhere along the line they got direct connections to
time sources.
Many years ago WU supplied a time signal and clocks.
As an aside, my public schooling had IBM clocks. They
normally worked ok. But in the change back and forth
with Daylight Savings Time, the clocks went haywire.
It always took a few days to get them back normal.
https://archive.org/details/Nations-Business-1952-05/page/n103/mode/1up
IBM dumped its clock division in the late 1950s.
Funny thing--where I work now they have manually adjusted clocks--I've
seen the clock guy walk up with his ladder, open the clock, and make
an adjustment, then go down the hall to the next clock . . .

It's not that the place isn't progressive about such things--they had
a robot delivering mail in the '70s (the robot finally died of old age
and lack of spares--it is missed).

On the other hand, another place I was working, in the '70s, I
remember it was 4:25 on a Friday afternoon and I was beat. I was
walking back to my desk to drop off the paperwork I was carrying, and
watching the clocks as I went down the hall, enjoying seeing the time
creep up on 4:30. So it's 4:29:30, and the damned clock BACKS UP half
an hour.

None of those were IBM though.
John Levine
2020-09-03 22:55:09 UTC
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
The former. They figured that they could afford to have the operator
correct the clock once every four years.
Our S/360 operator had to enter the date and time every day.
Wasn't a big deal. Usually got it right.
Around 1970, Princeton University had a 360/91 running OS, which
crashed and rebooted all the time, often several times a day. A clever
engineering student built a gizmo he called a TOAD, for Time Of Any
Day, which looked to the /91 like a terminal that typed in the
clock-setting command as the system was starting up. It was a box with
a picture of a toad, with lightbulbs for its eyes that lit up on the
guy's birthday.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Robin Vowels
2020-09-04 02:44:42 UTC
Post by h***@bbs.cpcn.com
The former. They figured that they could afford to have the operator
correct the clock once every four years.
Our S/360 operator had to enter the date and time every day.
Wasn't a big deal. Usually got it right.
Somewhere along the line they got direct connections to
time sources.
Many years ago WU supplied a time signal and clocks.
As an aside, my public schooling had IBM clocks. They
normally worked ok. But in the change back and forth
with Daylight Savings Time, the clocks went haywire.
It always took a few days to get them back normal.
IBM dumped its clock division in the late 1950s.
It did? Our building, new in 1969, had IBM clocks.
If they had been switched off, they would have told the time
correctly twice a day -- which would have been more
times a day than the clocks were correct when running.
Quadibloc
2020-09-05 06:27:45 UTC
Reply
Permalink
Post by Robin Vowels
It did? Our building, new in 1969, had IBM clocks.
If they had been switched off, they would have told the time
correctly twice a day -- which would have been more
times a day that the clocks were correct when running.
They might have been correct more often, but they would still have provided less
useful information about what time it was.

John Savard
h***@bbs.cpcn.com
2020-09-01 18:03:01 UTC
Reply
Permalink
Post by Charlie Gibbs
In _The Mythical Man Month_, Fred Brooks discusses the decision to
save 100 bytes by omitting leap year code from the supervisor.
People forget how much effort was made to save a byte
here, a byte there since core and disk space were so
limited. Likewise with CPU cycles. In the early
years of S/360, I believe most programming was done
in assembler for that reason. Assembler had tricks
with binary that would save space and time.
Post by Charlie Gibbs
Post by John Levine
With CKD disks, the ISAM in-memory index only needed one entry per
track, or even one per cylinder and the channel program took care
of finding the right individual record.
It wasn't just memory that was scarce. CPU cycles were almost as
precious, and the CKD architecture and its search commands allowed
processing to be offloaded onto the channel. It made sense at the
time, even though it doesn't now.
Oh yes.
Jon Elson
2020-09-03 15:42:47 UTC
Reply
Permalink
Post by Charlie Gibbs
It wasn't just memory that was scarce. CPU cycles were almost as
precious, and the CKD architecture and its search commands allowed
processing to be offloaded onto the channel. It made sense at the
time, even though it doesn't now.
Well, it really stopped making sense as soon as you had a disk-resident
multiprogramming OS. On the 360/30, the 360 was stopped when a channel
program (I assume that means selector channel) was running. So, the whole
idea of saving the CPU from disk search processing was a fallacy on that
model.

And, on the 360/50 and above, having the whole selector channel/control
unit/string of disk drives be totally locked out when doing a record key
search of a few cylinders was obviously not a great idea; the entire system
would grind to a halt very quickly.

I don't think the designers of the 360 had any idea how central to system
operation disks would quickly become. They were still in the 709x
generation when they planned this all out.

Jon
h***@bbs.cpcn.com
2020-09-03 18:12:37 UTC
Reply
Permalink
Post by Jon Elson
I don't think the designers of the 360 had any idea how central to system
operation disks would quickly become. They were still in the 709x
generation when they planned this all out.
Yes. The S/360 history explains how they sold more disk
drives and especially many more disk packs than anticipated.

Several key IBM engineers quit the company and went into
business for themselves or for other computer makers.
Given the high demand for disk space, there was a market
for disk clones.

I don't think IBM makes disks anymore, mainframe users
have to get them from somewhere else. Don't know why.
Peter Flass
2020-09-03 18:48:48 UTC
Reply
Permalink
Post by h***@bbs.cpcn.com
Post by Jon Elson
I don't think the designers of the 360 had any idea how central to system
operation disks would quickly become. They were still in the 709x
generation when they planned this all out.
Yes. The S/360 history explains how they sold more disk
drives and especially many more disk packs than anticipated.
Several key IBM engineers quit the company and went into
business for themselves or for other computer makers.
Given the high demand for disk space, there was a market
for disk clones.
I don't think IBM makes disks anymore, mainframe users
have to get them from somewhere else. Don't know why.
I don’t think IBM makes much of anything any more. They seem to be trying
to turn themselves into a service company. This is like Unisys, who don’t
seem to want to admit they ever made hardware. It takes a search by a
bloodhound to find any manuals on their site.
--
Pete
Charlie Gibbs
2020-09-03 19:49:30 UTC
Reply
Permalink
Post by h***@bbs.cpcn.com
Post by Jon Elson
I don't think the designers of the 360 had any idea how central to system
operation disks would quickly become. They were still in the 709x
generation when they planned this all out.
Yes. The S/360 history explains how they sold more disk
drives and especially many more disk packs than anticipated.
Several key IBM engineers quit the company and went into
business for themselves or for other computer makers.
Given the high demand for disk space, there was a market
for disk clones.
The same was true with Univac in the late '70s. A lot of sites
went to Control Data, who saw a market opening. Univac made all
sorts of threats about using CDC packs, which nonetheless worked
just fine. Since Univac couldn't meet the demand, they had only
themselves to blame (although it was fun to watch the finger-pointing
if there was a head crash).
Post by h***@bbs.cpcn.com
I don't think IBM makes disks anymore, mainframe users
have to get them from somewhere else. Don't know why.
Not enough money in it, perhaps?
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
h***@bbs.cpcn.com
2020-09-10 18:37:16 UTC
Reply
Permalink
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
Several key IBM engineers quit the company and went into
business for themselves or for other computer makers.
Given the high demand for disk space, there was a market
for disk clones.
The same was true with Univac in the late '70s. A lot of sites
went to Control Data, who saw a market opening. Univac made all
sorts of threats about using CDC packs, which nonetheless worked
just fine. Since Univac couldn't meet the demand, they had only
themselves to blame (although it was fun to watch the finger-pointing
if there was a head crash).
Was that mostly in Minneapolis from the former E.R.A.
group? I think they split off to form CDC, and then
split off again to form Cray.

Watson Jr talked about his response to competition
in his memoir. He admits that he believed it
was IBM's birthright to rule the computer business
and resented the upstarts. Obviously they didn't
agree. IBM had to settle a big lawsuit brought by CDC.
Watson later realized that big computers were a
specialty item, not really suited for mass marketing
like IBM's product line.


While I think Univac was a 'nicer' company than IBM
in terms of corporate culture, I believe their management
made many strategic errors over the years that kept
them well behind IBM.

I'm not sure what happened to Burroughs. They were
respected for good engineering.
Charlie Gibbs
2020-09-10 23:34:38 UTC
Reply
Permalink
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
Several key IBM engineers quit the company and went into
business for themselves or for other computer makers.
Given the high demand for disk space, there was a market
for disk clones.
The same was true with Univac in the late '70s. A lot of sites
went to Control Data, who saw a market opening. Univac made all
sorts of threats about using CDC packs, which nonetheless worked
just fine. Since Univac couldn't meet the demand, they had only
themselves to blame (although it was fun to watch the finger-pointing
if there was a head crash).
Was that mostly in Minneapolis from the former E.R.A.
group? I think they split off to form CDC, and then
split off again to form Cray.
Dunno. I remember noting that the name matched that of
the supercomputer manufacturer, but never figured out
whether it really was the (remnants of?) the same company.
(Consider HP. Now wipe your eyes and blow your nose.)
Post by h***@bbs.cpcn.com
Watson Jr talked about his response to competition
in his memoir. He admits that he believed it
was IBM's birthright to rule the computer business
and resented the upstarts. Obviously they didn't
agree. IBM had to settle a big lawsuit against CDC.
Watson later realized that big computers were a
specialty item, not really suited for mass marketing
like IBM's product line.
Too bad Watson's attitude is what rules most large
corporations today. :-(
Post by h***@bbs.cpcn.com
While I think Univac was a 'nicer' company than IBM
in terms of corporate culture, I believe their management
made many strategic errors over the years that kept
them well behind IBM.
I agree with the culture part. If something bad happened
Univac could escalate to get it fixed, but I suspect that
this happened less often at IBM.
Post by h***@bbs.cpcn.com
I'm not sure what happened to Burroughs. They were
respected for good engineering.
I had a friend who worked in a Burroughs shop. Their
MCP looked quite nice, if a bit high-level for my tastes.
They seemed to have a harder time keeping the hardware
running than Univac did, though.
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Peter Flass
2020-09-11 01:06:01 UTC
Reply
Permalink
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
Post by h***@bbs.cpcn.com
Several key IBM engineers quit the company and went into
business for themselves or for other computer makers.
Given the high demand for disk space, there was a market
for disk clones.
The same was true with Univac in the late '70s. A lot of sites
went to Control Data, who saw a market opening. Univac made all
sorts of threats about using CDC packs, which nonetheless worked
just fine. Since Univac couldn't meet the demand, they had only
themselves to blame (although it was fun to watch the finger-pointing
if there was a head crash).
Was that mostly in Minneapolis from the former E.R.A.
group? I think they split off to form CDC, and then
split off again to form Cray.
Dunno. I remember noting that the name matched that of
the supercomputer manufacturer, but never figured out
whether it really was the (remnants of?) the same company.
(Consider HP. Now wipe your eyes and blow your nose.)
Post by h***@bbs.cpcn.com
Watson Jr talked about his response to competition
in his memoir. He admits that he believed it
was IBM's birthright to rule the computer business
and resented the upstarts. Obviously they didn't
agree. IBM had to settle a big lawsuit against CDC.
Watson later realized that big computers were a
specialty item, not really suited for mass marketing
like IBM's product line.
Too bad Watson's attitude is what rules most large
corporations today. :-(
Post by h***@bbs.cpcn.com
While I think Univac was a 'nicer' company than IBM
in terms of corporate culture, I believe their management
made many strategic errors over the years that kept
them well behind IBM.
I agree with the culture part. If something bad happened
Univac could escalate to get it fixed, but I suspect that
this happened less often at IBM.
Post by h***@bbs.cpcn.com
I'm not sure what happened to Burroughs. They were
respected for good engineering.
I had a friend who worked in a Burroughs shop. Their
MCP looked quite nice, if a bit high-level for my tastes.
They seemed to have a harder time keeping the hardware
running than Univac did, though.
When was this? I worked on a 5500 in the late ‘60s or early ‘70s and they
seemed to have continuous crashes. They finally traced it to one of the
head-per-track discs. FEs tried to fix it several times without success,
but Burroughs didn’t want to replace it. I think the FEs gave it a couple
of raps with a hammer. “There, it’s totally broken now, how about a
replacement.” They did, and the system was solid afterwards.
--
Pete
Charlie Gibbs
2020-09-11 20:41:43 UTC
Reply
Permalink
Post by Peter Flass
When was this.
Mid '70s, on a 17xx.
Post by Peter Flass
I worked on 5500 in the late ‘60 or early ‘70s and they
seemed to have continuous crashes. They finally traced it to one of the
hard-per-track discs. FEs tried to fix it several times without success,
but Burroughs didn’t want to replace it. I think the FEs gave it a couple
of raps with a hammer. “There, it’s totally broken now, how about a
replacement.” They did, and the system was solid afterwards.
"If it jams, force it. If it breaks, it needed replacing anyway."

I recall BAH talking about a bad circuit board that kept coming back
until someone tossed it into the pond. Sometimes that's the only
way out.
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Peter Flass
2020-09-11 23:21:22 UTC
Reply
Permalink
Post by Charlie Gibbs
Post by Peter Flass
When was this.
Mid '70s, on a 17xx.
Post by Peter Flass
I worked on 5500 in the late ‘60 or early ‘70s and they
seemed to have continuous crashes. They finally traced it to one of the
hard-per-track discs. FEs tried to fix it several times without success,
but Burroughs didn’t want to replace it. I think the FEs gave it a couple
of raps with a hammer. “There, it’s totally broken now, how about a
replacement.” They did, and the system was solid afterwards.
"If it jams, force it. If it breaks, it needed replacing anyway."
I recall BAH talking about a bad circuit board that kept coming back
until someone tossed it into the pond. Sometimes that's the only
way out.
It is if you let the bean counters run the show. If the FEs say it’s bad,
maybe try to fix it once, but then toss it. Actually, the cost to deal with
a flakey part is probably greater than the cost of the part, not even
counting the cost in customer respect.
--
Pete
Terry Kennedy
2020-09-12 00:26:09 UTC
Reply
Permalink
Post by Charlie Gibbs
I agree with the culture part. If something bad happened
Univac could escalate to get it fixed, but I suspect that
this happened less often at IBM.
FWIW, I found a reproducible microcode bug in the DEC PDP-11/44. After months
of escalation, the official answer was "too bad, so sad".

On the other hand, I had an IBM 3138 (370/138 CPU) that had developed an
"impossible" error condition (irrecoverable channel I/O error but with sense bits
that said "nope, everything is fine here"). Within 5 days of the initial service
call, the ever-growing crowd of IBM CEs brought in a really, really odd person
who fiddled with the front panel switches, swung open one of the logic gates
and yanked a tri-lead (IBM's version of wire-wrap coax) out and said "change
this and it will be fine" and then demanded to be taken to the airport because
he had to fly to Saudi Arabia next. I basically said "Who was that masked man?"
and they told me he was one of the original designers of the 138.

But eventually IBM lost their way in customer support - we had a 9370 with all
sorts of issues from new delivery, and I refused to sign the customer acceptance
letter. For example, a user rapidly flipping the "test" switch prominently
featured on the front panel of every 3278 terminal would crash the 9370.
Their official answer was to have us put up signs telling users not to do
that or the system would crash. And they wanted us to do this in student labs.
Right... Eventually IBM took the system back under an agreement where neither of
us would talk about the experience for 5 years (long passed by now).
Bob Eager
2020-09-12 09:30:26 UTC
Reply
Permalink
Post by Terry Kennedy
I agree with the culture part. If something bad happened Univac could
escalate to get it fixed, but I suspect that this happened less often
at IBM.
FWIW, I found a reproducible microcode bug in the DEC PDP-11/44. After
months of escalation, the official answer was "too bad, so sad".
I had much the same with the ICL 2960. It would do a microcode halt under
certain reproducible conditions - which could occur with a badly behaved
FORTRAN program (or a few lines of assembler, my test program).

ICL didn't want to know. I don't know if (a) they didn't believe me (b)
didn't care or (c) were incompetent in recognising the problem. I suspect
(c), and my report never got near anyone technical.

I fixed the microcode, and then modified the operating system to deal
with the additional exception I generated.

See: https://www.bobeager.uk/anecdotes.html#hwhack
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Quadibloc
2020-09-11 03:39:40 UTC
Reply
Permalink
Post by h***@bbs.cpcn.com
I'm not sure what happened to Burroughs. They were
respected for good engineering.
Supposedly, despite the name, Unisys was the result of Burroughs eating Univac
instead of Univac eating Burroughs, or at least so I've been told somewhere.

John Savard
Quadibloc
2020-09-11 03:46:16 UTC
Reply
Permalink
Post by Quadibloc
Post by h***@bbs.cpcn.com
I'm not sure what happened to Burroughs. They were
respected for good engineering.
Supposedly, despite the name, Unisys was the result of Burroughs eating Univac
instead of Univac eating Burroughs, or at least so I've been told somewhere.
Yes: according to Wikipedia, Unisys came into being in 1986, due to a merger in
which Burroughs paid $4.8 billion and bought Sperry.

So Sperry got bought out - something "happened to" it - but Burroughs lives on,
just with an additional legacy system portfolio to lean on.

John Savard
J. Clarke
2020-09-03 21:41:12 UTC
Reply
Permalink
Post by h***@bbs.cpcn.com
Post by Jon Elson
I don't think the designers of the 360 had any idea how central to system
operation disks would quickly become. They were still in the 709x
generation when they planned this all out.
Yes. The S/360 history explains how they sold more disk
drives and especially many more disk packs than anticipated.
Several key IBM engineers quit the company and went into
business for themselves or for other computer makers.
Given the high demand for disk space, there was a market
for disk clones.
I don't think IBM makes disks anymore, mainframe users
have to get them from somewhere else. Don't know why.
For certain values. They don't make the rotating devices in sealed
capsules. They do make storage systems though, using drives purchased
from drive manufacturers.

I suspect that they just couldn't compete in the storage market
anymore, especially after the Deathstar flushed their reputation for
reliability down the toilet. Disks are commodity items these days.
They start in the market at around 500 bucks and drop out of it around
80.
h***@bbs.cpcn.com
2020-09-01 18:00:33 UTC
Reply
Permalink
Post by John Levine
Post by Dan Espen
Main Storage consists of 4,096; 8,192; 12,288; and
16,384 positions of magnetic core storage.
I'm guessing the 32K is accurate and was implemented in later models.
Models 1,2,3,4 were limited to 16K, model 5 which was different
internally and somewhat faster could go to 32. I never saw a model 5.
Post by Dan Espen
It's interesting that the 2311 disk which was also sold for other S/360
models has fixed sector sizes on the Model 20.
I always thought the variable block sizes on S/360 was a mistake.
It put too much complexity into user space. A pity IBM didn't sectorize
all of it's disk on all of it's models from the beginning.
I think it was all part of the expensive memory mindset when the 360
was designed. With CKD disks, the ISAM in-memory index only needed one
entry per track, or even one per cylinder and the channel program took
care of finding the right individual record. With the 20's disks, the
track index told it which track a record was on, but it had to read
each sector to find the right one, taking a 25ms disk revolution each
time.
Veterans of the 1401 told me they felt the CKD arrangement
was superior to the fixed-sector disks of the 1401.
More efficient use of scarce space.

Well into the 390 era disk space was expensive and
allocations had to be carefully considered. Today,
of course, disk is cheap and we can be sloppy.
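The sector-by-sector penalty John Levine describes in the quote above can be
put in rough numbers. The 25 ms revolution is from his post; the 20
sectors per track, the averaging, and ignoring seek time are all
assumptions for illustration:

```python
# Rough latency comparison of the two search schemes described above.
# The 25 ms revolution comes from the post; 20 sectors per track and
# the half-way averaging are assumptions -- seek time is ignored.

REV_MS = 25.0           # one disk revolution, per the post
SECTORS_PER_TRACK = 20  # assumed figure

def ckd_search_ms():
    """CKD: the channel's search command scans the track itself, so on
    average the record is found in about half a revolution."""
    return REV_MS / 2

def model20_search_ms():
    """Model 20 fixed sectors: the track index gives the track, but the
    host reads sectors one at a time at a revolution apiece; on average
    half the sectors are read before the right one turns up."""
    return (SECTORS_PER_TRACK / 2) * REV_MS

print(f"CKD in-channel search:     ~{ckd_search_ms():.1f} ms")
print(f"Model 20 sector-by-sector: ~{model20_search_ms():.1f} ms")
```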
Quadibloc
2020-09-02 00:05:58 UTC
Reply
Permalink
Post by h***@bbs.cpcn.com
Well into the 390 era disk space was expensive and
allocations had to be carefully considered. Today,
of course, disk is cheap and we can be sloppy.
I worked with a Honeywell 316 which had a 1311-like disk drive, so I well
remember that if you had three sectors on a track, they were each 443 bytes
long. The idea being not to waste any space - so the sector size varied
depending on how you formatted the disk, and the fewer sectors you had, the less
overhead there was in multiple headers and gaps.
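That formatting trade-off fits in a few lines. The raw-track and
per-sector overhead constants below are invented so that three sectors
come out at the 443 bytes mentioned; they are not real Honeywell 316 or
IBM 1311 parameters:

```python
# Usable data per track versus sector count: each extra sector costs a
# fixed header/gap. Constants are invented to reproduce the 443-byte
# figure above, not taken from any real drive's specifications.

RAW_TRACK_BYTES = 1400      # assumed raw capacity of one track
OVERHEAD_PER_SECTOR = 23    # assumed header + gap per sector

def usable_bytes(sectors):
    """Data bytes left on the track after per-sector overhead."""
    return RAW_TRACK_BYTES - sectors * OVERHEAD_PER_SECTOR

for n in (1, 3, 6):
    print(f"{n} sector(s)/track: {usable_bytes(n)} data bytes,"
          f" {usable_bytes(n) // n} per sector")
```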

It seems amazing to me that even today it's too much trouble for the disk
drivers and operating systems to support sectors with odd numbers of bytes.

John Savard
J. Clarke
2020-09-02 00:25:01 UTC
Reply
Permalink
Post by Quadibloc
Post by h***@bbs.cpcn.com
Well into the 390 era disk space was expensive and
allocations had to be carefully considered. Today,
of course, disk is cheap and we can be sloppy.
I worked with a Honeywell 316 which had a 1311-like disk drive, so I well
remember that if you had three sectors on a track, they were each 443 bytes
long. The idea being not to waste any space - so the sector size varied
depending on how you formatted the disk, and the fewer sectors you had, the less
overhead there was in multiple headers and gaps.
It seems amazing to me that even today it's too much trouble for the disk
drivers and operating systems to support sectors with odd numbers of bytes.
You can't buy disk drives with odd numbers of bytes so what's the
point?

A disk at this point is a black box that you plug a cable into and
give commands and receive responses. The operating system neither
knows nor cares what the internal organization of that disk might be.
J. Clarke
2020-09-11 04:30:33 UTC
Reply
Permalink
On Thu, 10 Sep 2020 20:44:03 -0700 (PDT), Quadibloc
Post by J. Clarke
Post by Quadibloc
It seems amazing to me that even today it's too much trouble for the disk
drivers and operating systems to support sectors with odd numbers of bytes.
You can't buy disk drives with odd numbers of bytes so what's the
point?
I had sort of presumed they were being made that way because that's what the
computers wanted rather than the other way around.
The computers were fine with 512 byte sectors. The 4K sectors were
something the drive manufacturers wanted IIUC. There was a transition
period where the drives had 4K sectors but reported 512 byte. And
now, well, what happens inside the capsule comes under the heading of
arcane knowledge accessible only to the high priests and acolytes (or
anybody who cares enough to reverse engineer the code in the
controller on the drive).
John Savard
Peter Flass
2020-09-11 13:48:47 UTC
Reply
Permalink
Post by J. Clarke
On Thu, 10 Sep 2020 20:44:03 -0700 (PDT), Quadibloc
Post by J. Clarke
Post by Quadibloc
It seems amazing to me that even today it's too much trouble for the disk
drivers and operating systems to support sectors with odd numbers of bytes.
You can't buy disk drives with odd numbers of bytes so what's the
point?
I had sort of presumed they were being made that way because that's what the
computers wanted rather than the other way around.
The computers were fine with 512 byte sectors. The 4K sectors were
something the drive manufacturers wanted IIUC. There was a transition
period where the drives had 4K sectors but reported 512 byte. And
now, well, what happens inside the capsule comes under the heading of
arcane knowledge accessible only to the high priests and acolytes (or
anybody who cares enough to reverse engineer the code in the
controller on the drive).
John Savard
However you’re keeping track of free space might have a problem with small
sectors on large disks, unless you use extent allocation. You would need a
bitmap or some kind of list/tree to keep track of free space (I am not a
lawyer, so I can’t speak definitively), and I think the overhead would kill
you.
--
Pete
Dennis Boone
2020-09-11 16:02:14 UTC
Reply
Permalink
Post by J. Clarke
The computers were fine with 512 byte sectors. The 4K sectors were
something the drive manufacturers wanted IIUC. There was a transition
period where the drives had 4K sectors but reported 512 byte.
Larger sectors and larger memory pages are related. The accounting for
multiple transfers to disk is more expensive than larger single
transfers. Page tables are smaller if pages are larger. It's more
efficient at the host level to use larger pages, and for the disk sector
size to match the host page size. 370 family has done larger pages for
years. There was research at Berkeley on the VAX in the early 80s I
think that indicated a 2k page size was more efficient even on hardware
of that era. Prime used ~2k pages for over half of its life. (Yes,
there are huge pages on some systems now which don't match disk sector
size.) Presumably the optimal page size depends on the ever(?)
increasing hardware performance.
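The page-table arithmetic behind that point is easy to sketch. The
16 MiB address space and the flat one-level table are assumptions for
illustration:

```python
# Entries in a flat, one-level page table mapping a 16 MiB address
# space at various page sizes -- both the space size and the one-level
# table model are assumptions for illustration.

SPACE = 16 * 1024 * 1024  # 16 MiB address space (assumed)

def table_entries(space, page):
    """Pages needed to map `space`, i.e. page-table entries."""
    return space // page

for page in (512, 2048, 4096):
    print(f"{page:5d}-byte pages: {table_entries(SPACE, page):6d} entries")
```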

Of course, some systems weren't ready for 4kn drives when modern SATA /
SAS drives started shipping with 4k sectors, so they had to offer the
option of logical 512 sectors. The various possible default/available
configurations are reasonably obvious.

I'm sure it also helps the internal accounting on the drive to have a
smaller number of larger sectors. But I don't think this was something
just the drive mfgs instigated.

De
Peter Flass
2020-09-11 18:18:02 UTC
Reply
Permalink
Post by Dennis Boone
Post by J. Clarke
The computers were fine with 512 byte sectors. The 4K sectors were
something the drive manufacturers wanted IIUC. There was a transition
period where the drives had 4K sectors but reported 512 byte.
Larger sectors and larger memory pages are related. The accounting for
multiple transfers to disk is more expensive than larger single
transfers. Page tables are smaller if pages are larger. It's more
efficient at the host level to use larger pages, and for the disk sector
size to match the host page size. 370 family has done larger pages for
years. There was research at Berkeley on the VAX in the early 80s I
think that indicated a 2k page size was more efficient even on hardware
of that era. Prime used ~2k pages for over half of its life. (Yes,
there are huge pages on some systems now which don't match disk sector
size.) Presumably the optimal page size depends on the ever(?)
increasing hardware performance.
Of course, some systems weren't ready for 4kn drives when modern SATA /
SAS drives started shipping with 4k sectors, so they had to offer the
option of logical 512 sectors. The various possible default/available
configurations are reasonably obvious.
I'm sure it also helps the internal accounting on the drive to have a
smaller number of larger sectors. But I don't think this was something
just the drive mfgs instigated.
Thinking about it, though, there’s no reason the system can’t allocate disk
in chunks larger than a sector size. It might make sense to allocate in 4k
chunks. Then it only has to keep track of allocated/free chunks, rather
than sectors. It’s a simple multiply to convert the chunk number to the
starting sector number.
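A minimal sketch of that scheme, assuming 512-byte sectors and 4 KiB
chunks (both sizes, and the bitmap-as-list representation, are
illustrative choices):

```python
# Free-space tracking per 4 KiB chunk over 512-byte sectors; the
# chunk-number-to-sector conversion is the single multiply mentioned.

SECTOR = 512
CHUNK = 4096
SECTORS_PER_CHUNK = CHUNK // SECTOR  # 8

class ChunkAllocator:
    def __init__(self, total_sectors):
        # One flag per chunk, not per sector -- an 8x smaller map here.
        self.free = [True] * (total_sectors // SECTORS_PER_CHUNK)

    def alloc(self):
        """Return the starting sector of a free chunk, or None if full."""
        for i, is_free in enumerate(self.free):
            if is_free:
                self.free[i] = False
                return i * SECTORS_PER_CHUNK  # chunk number -> sector
        return None

    def release(self, start_sector):
        self.free[start_sector // SECTORS_PER_CHUNK] = True

disk = ChunkAllocator(total_sectors=64)  # a tiny 32 KiB "disk"
print(disk.alloc(), disk.alloc())        # first two chunks: sectors 0 and 8
```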
--
Pete
Charlie Gibbs
2020-09-02 00:50:45 UTC
Reply
Permalink
On Tuesday, September 1, 2020 at 12:00:35 PM UTC-6,
Post by h***@bbs.cpcn.com
Well into the 390 era disk space was expensive and
allocations had to be carefully considered. Today,
of course, disk is cheap and we can be sloppy.
I worked with a Honeywell 316 which had a 1311-like disk drive, so I well
remember that if you had three sectors on a track, they were each 443 bytes
long. The idea being not to waste any space - so the sector size varied
depending on how you formatted the disk, and the fewer sectors you had,
the less overhead there was in multiple headers and gaps.
Yes, I remember those days. I wrote a little utility that would take
a record size and calculate optimum blocking factors for a CKD disk.
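A sketch of what such a utility might look like; the track capacity and
per-block overhead below are stand-in constants for illustration, not
any particular drive's published geometry:

```python
# Brute-force blocking-factor calculator in the spirit of that utility.
# TRACK_BYTES and BLOCK_OVERHEAD are stand-in constants, not a real
# device's published figures.

TRACK_BYTES = 7294     # assumed usable bytes per track
BLOCK_OVERHEAD = 101   # assumed count-area/gap cost per block

def best_blocking(record_len, max_block=32760):
    """Return (records_per_block, blocks_per_track, records_per_track)
    maximizing records stored per track for fixed-length records."""
    best = (0, 0, 0)
    for rpb in range(1, max_block // record_len + 1):
        block = rpb * record_len
        bpt = TRACK_BYTES // (block + BLOCK_OVERHEAD)
        if bpt == 0:
            break  # block no longer fits on a track
        if bpt * rpb > best[2]:
            best = (rpb, bpt, bpt * rpb)
    return best

rpb, bpt, rpt = best_blocking(80)
print(f"80-byte records: {rpb}/block, {bpt} block(s)/track, {rpt}/track")
```

Real CKD capacity formulas were nonlinear in block and key length, so a
production version would substitute the device's published formula.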
It seems amazing to me that even today it's too much trouble for the disk
drivers and operating systems to support sectors with odd numbers of bytes.
Considering how cheap and plentiful disk space is these days,
it doesn't take much trouble to be too much.
--
/~\ Charlie Gibbs | Microsoft is a dictatorship.
\ / <***@kltpzyxm.invalid> | Apple is a cult.
X I'm really at ac.dekanfrus | Linux is anarchy.
/ \ if you read it the right way. | Pick your poison.
Jon Elson
2020-09-03 15:51:03 UTC
Reply
Permalink
Post by h***@bbs.cpcn.com
Veterans of the 1401 told me they felt the CKD arrangement
was superior to that of the fixed sectors of the 1401.
More efficient use of scarce space.
Well into the 390 era disk space was expensive and
allocations had to be carefully considered. Today,
of course, disk is cheap and we can be sloppy.
With careful design, you could make efficient use of the disk, as you made
the records just the right size, with no wasted space at the end. But, with
poor thought, you could also waste a huge amount of space by choosing a size
that left almost a whole record unusable at the end of each track. And,
when your installation moved to a new disk drive model, this could require
program redesign to accommodate the new track capacity.

But, I think there was another factor. By allocating entire tracks and
cylinders for data sets, you left unused capacity at the end of each one.
Also, it was easy to get a fragmentation condition where enough contiguous
cylinders were not available to create a data set. I think this actually
wasted quite a bit of disk space that could have been made use of in a fixed
block size, random allocation file system.

Jon