Discussion:
IBM Programmer Aptitude Test
f***@dfwair.net
2014-07-19 17:31:29 UTC
I took this test in 1962 when I was a senior in high school. I did well enough to get two summer job offers - one from IBM and one from the Ford Scientific Research Laboratory. I took the job with Ford because it involved programming for real applications (the IBM job involved being an assistant to a man who repaired accounting machines). The test evaluated my ability to think in a logical manner and solve puzzles. While certainly not comprehensive by today's standards, it did work fairly well from my perspective. I ended up with a 40+ year career in software development.
Reading an old (1971) journal article, I saw a reference to the IBM Programmer Aptitude Test:
McNamara, W. J., & Hughes, J. L.
Manual for the revised programmer aptitude test.
White Plains, New York: IBM, 1969.
The test apparently involved completion of number sequences, geometric paired
comparisons, and word problems similar to those in junior high school mathematics.
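Those number-sequence items are easy to sketch in code. As a toy illustration (the function and the sequences below are my own, not items from the actual test), here is a minimal Python guesser that handles the two simplest families, arithmetic and geometric progressions:

```python
# Toy illustration only -- these are NOT actual PAT items. A number-sequence
# item gives a few terms and asks for the next one; this sketch recognizes
# arithmetic (constant difference) and geometric (constant ratio) series.

def next_term(seq):
    """Return the next term of seq if it is arithmetic or geometric,
    otherwise None."""
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:          # arithmetic, e.g. 2, 5, 8, 11 -> 14
        return seq[-1] + diffs[0]
    ratios = [b / a for a, b in zip(seq, seq[1:]) if a != 0]
    if len(ratios) == len(seq) - 1 and len(set(ratios)) == 1:
        return seq[-1] * ratios[0]    # geometric, e.g. 3, 6, 12, 24 -> 48.0
    return None

print(next_term([2, 5, 8, 11]))   # 14
print(next_term([3, 6, 12, 24]))  # 48.0
```

The real test items were multiple choice on paper, of course; this is just the flavor of the reasoning involved.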
1. Has anyone out there actually taken this test?
2. How would I go about getting a copy?
Thanks in advance.
--
Paul Palmer
Kidder Hall 368
Oregon State University, Corvallis, Oregon 97331-4605
Anne & Lynn Wheeler
2014-07-19 19:09:48 UTC
Post by f***@dfwair.net
I took this test in 1962 when I was a senior in high school. I did
well enough to get two summer job offers - one from IBM and one from
the Ford Scientific Research Laboratory. I took the job with Ford
because it involved programming for real applications (the IBM job
involved being an assistant to a man who repaired accounting
machines). The test evaluated my ability to think in a logical manner
and solve puzzles. While certainly not comprehensive by today's
standards, it did work fairly well from my perspective. I ended up
with a 40+ year career in software development.
I went to recruitment day and took the IBM programmer aptitude test just
before I graduated. The IBMer said that I didn't get a high enough score to
be offered a job. I then explained that I had already been working as
primary IBM operating system support at the univ, had been brought in to
help set up Boeing Computer Services as a full-time employee ... and had to
choose between staying with Boeing or accepting a job offer with the IBM
Cambridge Science Center (at "staff" level ... skipping beginning,
associate and the other lower levels). He couldn't reconcile my
score and the job offer from the science center (of course, it didn't make
any difference). Posts mentioning the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

Recent posts noting that major IBM products had been originally developed at
customer or internal datacenters and then moved to a (software)
"development group" for support and maintenance ... the transition to
"object code only" in the 80s greatly curtailed much of that
innovation:
http://www.garlic.com/~lynn/2014.html#19 the suckage of MS-DOS, was Re: 'Free Unix!
http://www.garlic.com/~lynn/2014h.html#74 The Tragedy of Rapid Evolution?
http://www.garlic.com/~lynn/2014h.html#79 EBFAS
http://www.garlic.com/~lynn/2014h.html#80 The Tragedy of Rapid Evolution?
http://www.garlic.com/~lynn/2014h.html#99 TSO Test does not support 65-bit debugging?
http://www.garlic.com/~lynn/2014i.html#5 "F[R]eebie" software
http://www.garlic.com/~lynn/2014i.html#6 TSO Test does not support 65-bit debugging?
http://www.garlic.com/~lynn/2014i.html#7 You can make your workplace 'happy'

other recent refs:
http://www.garlic.com/~lynn/2014c.html#31 How many EBCDIC machines are still around?
http://www.garlic.com/~lynn/2014e.html#19 The IBM Strategy
http://www.garlic.com/~lynn/2014e.html#23 Is there any MF shop using AWS service?
http://www.garlic.com/~lynn/2014e.html#69 Before the Internet: The golden age of online services
http://www.garlic.com/~lynn/2014f.html#36 IBM Historic computing
http://www.garlic.com/~lynn/2014f.html#73 Is end of mainframe near ?
http://www.garlic.com/~lynn/2014g.html#62 Interesting and somewhat disturbing article about IBM in BusinessWeek. What is your opinion?
http://www.garlic.com/~lynn/2014g.html#63 Costs of core
http://www.garlic.com/~lynn/2014i.html#13 IBM & Boyd
http://www.garlic.com/~lynn/2014i.html#31 Speed of computers--wave equation for the copper atom? (curiosity)
--
virtualization experience starting Jan1968, online at home since Mar1970
jmfbahciv
2014-07-20 13:07:54 UTC
Post by Anne & Lynn Wheeler
Post by f***@dfwair.net
I took this test in 1962 when I was a senior in high school. I did
well enough to get two summer job offers - one from IBM and one from
the Ford Scientific Research Laboratory. I took the job with Ford
because it involved programming for real applications (the IBM job
involved being an assistant to a man who repaired accounting
machines). The test evaluated my ability to think in a logical manner
and solve puzzles. While certainly not comprehensive by today's
standards, it did work fairly well from my perspective. I ended up
with a 40+ year career in software development.
I went to recruitment day and took the IBM programmer aptitude test just
before I graduated. The IBMer said that I didn't get a high enough score to
be offered a job. I then explained that I had already been working as
primary IBM operating system support at the univ, had been brought in to
help set up Boeing Computer Services as a full-time employee ... and had to
choose between staying with Boeing or accepting a job offer with the IBM
Cambridge Science Center (at "staff" level ... skipping beginning,
associate and the other lower levels). He couldn't reconcile my
score and the job offer from the science center (of course, it didn't make
any difference). Posts mentioning the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
Did you ever find out what kinds of questions caused the low score?

<snip>

/BAH
Anne & Lynn Wheeler
2014-07-20 13:52:25 UTC
Post by jmfbahciv
Did you ever find out what kinds of questions caused the low score?
re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test

Never did, but the IBMer doing the interview was incredulous when I told
him that I already had an offer from the science center
http://www.garlic.com/~lynn/subtopic.html#545tech

It was a fairly senior position ... and not entry level. One conjecture
is that the test was oriented toward finding those that fit the "Man Month"
profile:
http://en.wikipedia.org/wiki/The_Mythical_Man-Month

recent refs:
http://www.garlic.com/~lynn/2014g.html#99 IBM architecture, was Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
http://www.garlic.com/~lynn/2014h.html#103 TSO Test does not support 65-bit debugging?
http://www.garlic.com/~lynn/2014i.html#5 "F[R]eebie" software
http://www.garlic.com/~lynn/2014i.html#41 How Comp-Sci went from passing fad to must have major

I have mentioned in the past being blamed for online computer
conferencing on the internal network in the late 70s & early 80s
(folklore is that when the executive committee was told about online
computer conferencing & the internal network, five of the six wanted to fire me).
internal network posts
http://www.garlic.com/~lynn/subnetwork.html#internalnet

I've also mentioned that, somewhat as a result of online computer
conferencing, a researcher was assigned to study how I communicated.
They sat in the back of my office for nine months, took notes on how I
communicated face-to-face and by telephone, went with me to meetings, and got
copies of all my incoming & outgoing email as well as logs of all my
instant messages (almost tempted to reference gov. eavesdropping). The
result was a number of papers and at least one book, as well as a Stanford
PhD (joint between language and computer AI; Winograd was advisor on the
computer AI side). some past posts
http://www.garlic.com/~lynn/subnetwork.html#cmc

The researcher had previously spent some time as an English as a Second
Language instructor, and once commented that my use of English was
characteristic of a non-native speaker.
--
virtualization experience starting Jan1968, online at home since Mar1970
jmfbahciv
2014-07-21 12:52:38 UTC
Post by Anne & Lynn Wheeler
Post by jmfbahciv
Did you ever find out what kinds of questions caused the low score?
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
Never did, but the IBMer doing the interview was incredulous when I told
him that I already had an offer from the science center
http://www.garlic.com/~lynn/subtopic.html#545tech
It was a fairly senior position ... and not entry level. One conjecture
is that the test was oriented toward finding those that fit the "Man Month"
http://en.wikipedia.org/wiki/The_Mythical_Man-Month
There were psych questions in the test?
Post by Anne & Lynn Wheeler
http://www.garlic.com/~lynn/2014g.html#99 IBM architecture, was Fifty Years
of nitpicking definitions, was BASIC,theProgrammingLanguageT
Post by Anne & Lynn Wheeler
http://www.garlic.com/~lynn/2014h.html#103 TSO Test does not support 65-bit debugging?
http://www.garlic.com/~lynn/2014i.html#5 "F[R]eebie" software
http://www.garlic.com/~lynn/2014i.html#41 How Comp-Sci went from passing fad
to must have major
Post by Anne & Lynn Wheeler
I have mentioned in the past being blamed for online computer
conferencing on the internal network in the late 70s & early 80s
(folklore is that when the executive committee was told about online
computer conferencing & the internal network, 5of6 wanted to fire me).
internal network posts
http://www.garlic.com/~lynn/subnetwork.html#internalnet
Yeah. Nothing like innovation to scare them.
Post by Anne & Lynn Wheeler
I've also mentioned that, somewhat as a result of online computer
conferencing, a researcher was assigned to study how I communicated.
They sat in the back of my office for nine months, took notes on how I
communicated face-to-face and by telephone, went with me to meetings, and got
copies of all my incoming & outgoing email as well as logs of all my
instant messages (almost tempted to reference gov. eavesdropping). The
result was a number of papers and at least one book, as well as a Stanford
PhD (joint between language and computer AI; Winograd was advisor on the
computer AI side). some past posts
http://www.garlic.com/~lynn/subnetwork.html#cmc
The researcher had previously spent some time as an English as a Second
Language instructor, and once commented that my use of English was
characteristic of a non-native speaker.
Was the area where you grew up French-based?

/BAH
Anne & Lynn Wheeler
2014-07-21 13:29:31 UTC
Post by jmfbahciv
Was the area where you grew up French-based?
re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test

Nope, but my mother says I was almost 3 before I talked.
--
virtualization experience starting Jan1968, online at home since Mar1970
g***@mail.com
2014-07-21 15:55:03 UTC
Post by Anne & Lynn Wheeler
Post by jmfbahciv
Was the area where you grew up French-based?
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
Nope, but my mother says I was almost 3 before I talked.
And then never shut up. :)

I think of the young non-working husbands whom I see in the supermarkets:
little girl standing in the basket, constant chatter, usually prefixed
by "Mammy buys that" ('that' is usually sweet, sticky, and most unlikely
to be bought by Mammy).
--
maus
greenaum
2014-07-21 14:33:18 UTC
On Sun, 20 Jul 2014 09:52:25 -0400, Anne & Lynn Wheeler
Post by Anne & Lynn Wheeler
I've also mentioned that, somewhat as a result of online computer
conferencing, a researcher was assigned to study how I communicated.
They sat in the back of my office for nine months, took notes on how I
communicated face-to-face and by telephone, went with me to meetings, and got
copies of all my incoming & outgoing email as well as logs of all my
instant messages (almost tempted to reference gov. eavesdropping). The
result was a number of papers and at least one book, as well as a Stanford
PhD
Yup, I remember when that came up, mentioning your unusual brain.
Fortunately the right kind of unusual! It's believed by some that Lynn
was born with a paper-tape reader on the back of his head.

---------------------------------------------------------------------

"hey let's educate the brutes, we know we are superior to them anyway,
just through genetics, we are gentically superior to the working
class. They are a shaved monkey. If we educate them, they will be able
to read instructions, turn up on time and man the conveyor belts,
sorted." #
Walter Bushell
2014-07-21 19:40:37 UTC
Post by greenaum
On Sun, 20 Jul 2014 09:52:25 -0400, Anne & Lynn Wheeler
Post by Anne & Lynn Wheeler
I've also mentioned that, somewhat as a result of online computer
conferencing, a researcher was assigned to study how I communicated.
They sat in the back of my office for nine months, took notes on how I
communicated face-to-face and by telephone, went with me to meetings, and got
copies of all my incoming & outgoing email as well as logs of all my
instant messages (almost tempted to reference gov. eavesdropping). The
result was a number of papers and at least one book, as well as a Stanford
PhD
Yup, I remember when that came up, mentioning your unusual brain.
Fortunately the right kind of unusual! It's believed by some that Lynn
was born with a paper-tape reader on the back of his head.
I'm sure he's upgraded to a micro SD by now, or perhaps a wifi link.
Dan Espen
2014-07-20 15:28:50 UTC
Post by Anne & Lynn Wheeler
Post by f***@dfwair.net
I took this test in 1962 when I was a senior in high school. I did
well enough to get two summer job offers - one from IBM and one from
the Ford Scientific Research Laboratory. I took the job with Ford
because it involved programming for real applications (the IBM job
involved being an assistant to a man who repaired accounting
machines). The test evaluated my ability to think in a logical manner
and solve puzzles. While certainly not comprehensive by today's
standards, it did work fairly well from my perspective. I ended up
with a 40+ year career in software development.
I went to recruitment day and took the IBM programmer aptitude test just
before I graduated. The IBMer said that I didn't get a high enough score to
be offered a job.
I went to a technical school to learn programming.
I did well in the school, earning an A and acing all the tests.

Then my employer at the time sent me to HR to take the aptitude
test. They told me I didn't pass but asked me to take it again.
Then they told me I didn't pass again, but offered me a programming
position anyway.

The thing is, I usually do well on those types of tests.

Years later it occurred to me, maybe they were lying.
Maybe I did do well on the test. Maybe too well, leading
them to the second test.

I'll never know. Either the test completely failed to
measure my ability, or maybe they were just having their way
with me.
--
Dan Espen
Anne & Lynn Wheeler
2014-07-20 15:47:56 UTC
Post by Dan Espen
I went to a technical school to learn programming.
I did well in the school, earning an A and acing all the tests.
Then my employer at the time sent me to HR to take the aptitude
test. They told me I didn't pass but asked me to take it again.
Then they told me I didn't pass again, but offered me a programming
position anyway.
The thing is, I usually do well on those types of tests.
Years later it occurred to me, maybe they were lying.
Maybe I did do well on the test. Maybe too well, leading
them to the second test.
I'll never know. Either the test completely failed to
measure my ability, or maybe they were just having their way
with me.
re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test

Shortly after joining IBM ... I guess I started to be a problem ...
during the "Future System" period, I refused to work on FS, continued
to work on 370, and would even periodically ridicule FS
http://www.garlic.com/~lynn/submain.html#futuresys

There were a few similar instances even before getting blamed for online
computer conferencing. About the same time as the online computer
conferencing flap ... I wrote an open door claiming that I was vastly
underpaid, even including references. I got back a written response from
the head of HR that my complete employment history had been reviewed and I
was making exactly what I was supposed to. I then took my original and
their response and wrote a response that I had been asked to interview
new hires for a new group that would work under my technical direction,
and HR was making the new hires offers that were 30% more than I was
currently making. I never got a response from HR ... but within a few
weeks, I got a 30% raise ... i.e., it wasn't a 30% raise to put me at my
correct salary level, it was a 30% raise to bring me up level with what
they were offering the new hires. past refs:
http://www.garlic.com/~lynn/2009h.html#74 My Vintage Dream PC
http://www.garlic.com/~lynn/2010c.html#82 search engine history, was Happy DEC-10 Day
http://www.garlic.com/~lynn/2010f.html#79 The 2010 Census
http://www.garlic.com/~lynn/2010m.html#66 Win 3.11 on Broadband
http://www.garlic.com/~lynn/2011f.html#0 coax (3174) throughput
http://www.garlic.com/~lynn/2011g.html#2 WHAT WAS THE PROJECT YOU WERE INVOLVED/PARTICIPATED AT IBM THAT YOU WILL ALWAYS REMEMBER?
http://www.garlic.com/~lynn/2011g.html#12 Clone Processors
http://www.garlic.com/~lynn/2012k.html#28 How to Stuff a Wild Duck
http://www.garlic.com/~lynn/2012k.html#42 The IBM "Open Door" policy
http://www.garlic.com/~lynn/2014c.html#65 IBM layoffs strike first in India; workers describe cuts as 'slaughter' and 'massive'
http://www.garlic.com/~lynn/2014h.html#81 The Tragedy of Rapid Evolution?

Periodically during my career, people would remind me that "business
ethics" was an *oxymoron*.

other past posts referencing being told that "business ethics" is an
*oxymoron*
http://www.garlic.com/~lynn/2007j.html#72 IBM Unionization
http://www.garlic.com/~lynn/2009.html#53 CROOKS and NANNIES: what would Boyd do?
http://www.garlic.com/~lynn/2009e.html#37 How do you see ethics playing a role in your organizations current or past?
http://www.garlic.com/~lynn/2009o.html#36 U.S. students behind in math, science, analysis says
http://www.garlic.com/~lynn/2009o.html#52 Revisiting CHARACTER and BUSINESS ETHICS
http://www.garlic.com/~lynn/2009o.html#57 U.S. begins inquiry of IBM in mainframe market
http://www.garlic.com/~lynn/2009r.html#50 "Portable" data centers
http://www.garlic.com/~lynn/2010b.html#38 Happy DEC-10 Day
http://www.garlic.com/~lynn/2010f.html#20 Would you fight?
http://www.garlic.com/~lynn/2010g.html#0 16:32 far pointers in OpenWatcom C/C++
http://www.garlic.com/~lynn/2010g.html#44 16:32 far pointers in OpenWatcom C/C++
http://www.garlic.com/~lynn/2011b.html#59 Productivity And Bubbles
http://www.garlic.com/~lynn/2012k.html#28 How to Stuff a Wild Duck
http://www.garlic.com/~lynn/2012k.html#42 The IBM "Open Door" policy
--
virtualization experience starting Jan1968, online at home since Mar1970
Walter Bushell
2014-07-20 16:15:51 UTC
Post by Anne & Lynn Wheeler
There were a few similar instances even before getting blamed for online
computer conferencing. About the same time as the online computer
conferencing flap ... I wrote an open door claiming that I was vastly
underpaid, even including references. I got back a written response from
the head of HR that my complete employment history had been reviewed and I
was making exactly what I was supposed to. I then took my original and
their response and wrote a response that I had been asked to interview
new hires for a new group that would work under my technical direction,
and HR was making the new hires offers that were 30% more than I was
currently making. I never got a response from HR ... but within a few
weeks, I got a 30% raise ... i.e., it wasn't a 30% raise to put me at my
correct salary level, it was a 30% raise to bring me up level with what
This happens all the time. In fields where pay is going up rapidly or
periods of high inflation, companies will frequently pay new hires
more than the people hired last year or the year before etcetera.

They figure they can (usually) get away with small raises for their
current employees, but have to meet the market for newbies.

WHAT? You were expecting justice? From a corporation?
Anne & Lynn Wheeler
2014-07-20 16:27:38 UTC
Post by Walter Bushell
This happens all the time. In fields where pay is going up rapidly or
periods of high inflation, companies will frequently pay new hires
more than the people hired last year or the year before etcetera.
They figure they can (usually) get away with small raises for their
current employees, but have to meet the market for newbies.
WHAT? You were expecting justice? From a corporation?
re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test

I had included in the original open door a copy of a (then recent) SJMN
series on pay in silicon valley ... basically, job hopping was a
significant component ... if you had been with the same company more
than two years, you were underpaid ... but it didn't have a case where
somebody with nearly 20 years in the business was making 30% less than
new-hire offers (much more egregious than any of the examples).

However, recently in the news several silicon valley companies have
settled cases over salary fixing and agreements not to poach each
other's workers.

Google and Apple Settle Lawsuit Alleging Wage-Fixing
http://time.com/76655/google-apple-settle-wage-fixing-lawsuit/
Apple, Google Settle Wage-Fixing and Hiring Conspiracy Case
http://www.vanityfair.com/online/daily/2014/04/apple-google-settle-wage-fixing-hiring-case
Tech giants settle wage-fixing allegations for a reported $324M
http://nypost.com/2014/04/24/tech-giants-settle-wage-fixing-allegations-for-a-reported-324m/
Fixing a Salary Negotiation Mistake Before the Job Offer
http://www.salary.com/advice/layouthtmls/advl_display_Cat8_Ser202_Par304.html
Apple, others officially agree to $325M settlement in Silicon Valley
wage fixing case
http://appleinsider.com/articles/14/05/23/apple-others-officially-agree-to-325m-settlement-in-silicon-valley-wage-fixing-case
Pixar, LucasFilm, DreamWorks Animation In Alleged Wage-Fixing Cartel
To Boost Profit
http://nikkifinke.com/pixar-lucasfilm-dreamworks-animation-wage-fixing-conspiracy/
Tech giants lose round in wage-fixing suit
http://www.cnet.com/news/judge-denies-request-for-summary-judgment-in-tech-firm-wage-suit/
--
virtualization experience starting Jan1968, online at home since Mar1970
Dan Espen
2014-07-20 23:16:43 UTC
Post by Anne & Lynn Wheeler
Post by Dan Espen
I went to a technical school to learn programming.
I did well in the school, earning an A and acing all the tests.
Then my employer at the time sent me to HR to take the aptitude
test. They told me I didn't pass but asked me to take it again.
Then they told me I didn't pass again, but offered me a programming
position anyway.
The thing is, I usually do well on those types of tests.
Years later it occurred to me, maybe they were lying.
Maybe I did do well on the test. Maybe too well, leading
them to the second test.
I'll never know. Either the test completely failed to
measure my ability, or maybe they were just having their way
with me.
Shortly after joining IBM ... I guess I started to be a problem ...
during the "Future System" period, I refused to work on FS, continued
to work on 370, and would even periodically ridicule FS
You weren't the problem. IBM management was the problem.
As history has proved, you were right all along.
Why you hung around was the mystery.
You had more sense than the fools that surrounded you.
Post by Anne & Lynn Wheeler
There were a few similar instances even before getting blamed for online
computer conferencing. About the same time as the online computer
conferencing flap
Hey, I got "blamed" for lots of things.
But I didn't quite look at it that way, especially
if whatever I did got used. So much work gets done
in spite of management that I look at it as just the way things
are done.
Post by Anne & Lynn Wheeler
... I wrote an open door claiming that I was vastly
underpaid, even including references. I got back a written response from
the head of HR that my complete employment history had been reviewed and I
was making exactly what I was supposed to. I then took my original and
their response and wrote a response that I had been asked to interview
new hires for a new group that would work under my technical direction,
and HR was making the new hires offers that were 30% more than I was
currently making. I never got a response from HR ... but within a few
weeks, I got a 30% raise ... i.e., it wasn't a 30% raise to put me at my
correct salary level, it was a 30% raise to bring me up level with what
Interesting.
Seems like they really did appreciate your skills.
But still I would have taken the raise and parlayed it into another
raise: the one that comes with a job change.

As a consultant, I once saved the client so much money
that they called up my company, asked them to come down, and
told them what I'd done and how much they appreciated it.
Real nice. Got me a 40% raise.

Unfortunately, the client's rate went up the same amount.
They held on to me, anyway.
I only left that account because I finished everything they
could think of.

A year later they finally figured out something new, so I went
back for another 6 months. One of my best assignments
on one of my favorite machines, the IBM System/34.

I and the team I worked with ran rings around the mainframe
they used in headquarters.
Post by Anne & Lynn Wheeler
periodically during my career, people would remind me that "business
ethics" was an *oxymoron*.
Maybe so, but as an employee, and especially when I was consulting
I considered absolute honesty to be imperative.
And the consulting company I worked for backed me up every time.
--
Dan Espen
Anne & Lynn Wheeler
2014-07-21 00:43:49 UTC
Post by Dan Espen
Hey, I got "blamed" for lots of things.
But I didn't quite look at it that way, especially
if whatever I did got used. So much work gets done
in spite of management, I look at it as just the way things
are done.
re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test

One of the customers that I would drop in on (and got to know pretty
well; I'd sit around and kibitz with the datacenter manager) had an
enormously huge football field of IBM mainframes ... maybe not like
Renton or spook base ... but still pretty large. The local IBM branch
manager had horribly offended the customer ... and as "revenge" they
were going to be the first commercial true blue customer to install a
clone processor (the clone vendor had been selling into the education &
scientific market ... but had yet to break into the commercial market).

I got asked to spend six months on site at the customer account. The claim
was that the branch manager was a good sailing buddy of the CEO ... and if
the customer became the first commercial account to install a clone processor
... it would ruin the branch manager's career. I was supposed to be there
for six months to make it look like it was a technical issue
(distracting any reflection on the branch manager) ... however, I knew
from the customer that there wasn't going to be anything that stopped them
from installing the processor from the clone vendor (although it would be
the only one in a vast sea of true blue machines). I was told that if I
didn't do it, I could kiss goodbye to any career in the company.

One of the reasons I stayed was that there were more toys than anywhere else
in the world. One of my hobbies was doing enhanced production operating
systems for internal datacenters ... and I could walk into almost any
corporate internal datacenter in the world and be allowed to play. I also
got to play disk engineer in bldgs. 14&15 ... or dozens of other things
... all below top executive radar.

past posts getting to play disk engineer
http://www.garlic.com/~lynn/subtopic.html#disk
one of my long-time internal operating system customers was the
worldwide online sales & marketing system HONE ... some past posts
http://www.garlic.com/~lynn/subtopic.html#hone

FS was supposed to completely replace 370 ... and internal politics was
killing off 370 efforts ... which is credited with giving clone vendors
a market foothold
http://www.garlic.com/~lynn/submain.html#futuresys

In the wake of the FS failure, there was a mad rush to get products back
into the pipeline ... 3033 (168 remapped to 20% faster chips) and 3081 were
kicked off in parallel. A couple of us got the 3033 processor engineers
to work on a 16-way design in their spare time. Everybody in high-end
mainframe land (POK) thought it was really great until somebody told the
head of POK that it could be decades before the POK favorite-son operating
system had 16-way support. Then we were asked to never visit POK again,
and the processor engineers were instructed to never get distracted
again. However, I could still sneak into POK and go bike riding with the
processor engineers.

recent posts mentioning the Renton datacenter ... at the time I was there,
it had upwards of $300M of IBM mainframe equipment ...
http://www.garlic.com/~lynn/2014c.html#31 How many EBCDIC machines are still around?
http://www.garlic.com/~lynn/2014e.html#8 The IBM Strategy
http://www.garlic.com/~lynn/2014e.html#9 Boyd for Business & Innovation Conference
http://www.garlic.com/~lynn/2014e.html#19 The IBM Strategy
http://www.garlic.com/~lynn/2014e.html#23 Is there any MF shop using AWS service?
http://www.garlic.com/~lynn/2014f.html#36 IBM Historic computing
http://www.garlic.com/~lynn/2014f.html#73 Is end of mainframe near ?
http://www.garlic.com/~lynn/2014f.html#80 IBM Sales Fall Again, Pressuring Rometty's Profit Goal
http://www.garlic.com/~lynn/2014g.html#57 Interesting and somewhat disturbing article about IBM in BusinessWeek. What is your opinion?
http://www.garlic.com/~lynn/2014g.html#63 Costs of core
http://www.garlic.com/~lynn/2014i.html#13 IBM & Boyd

past posts mentioning the branch manager who horribly
offended one of his customers:
http://www.garlic.com/~lynn/2005f.html#63 Moving assembler programs above the line
http://www.garlic.com/~lynn/2007b.html#32 IBMLink 2000 Finding ESO levels
http://www.garlic.com/~lynn/2011c.html#19 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011c.html#28 If IBM Hadn't Bet the Company
http://www.garlic.com/~lynn/2011d.html#12 I actually miss working at IBM
http://www.garlic.com/~lynn/2011l.html#19 Selectric Typewriter--50th Anniversary
http://www.garlic.com/~lynn/2011m.html#31 computer bootlaces
http://www.garlic.com/~lynn/2011m.html#61 JCL CROSS-REFERENCE Utilities (OT for Paul, Rick, and Shmuel)
http://www.garlic.com/~lynn/2012f.html#21 Word Length
http://www.garlic.com/~lynn/2012k.html#8 International Business Marionette
http://www.garlic.com/~lynn/2013e.html#10 The Knowledge Economy Two Classes of Workers
http://www.garlic.com/~lynn/2013l.html#22 Teletypewriter Model 33

misc. recent posts mentioning 16-way
http://www.garlic.com/~lynn/2013h.html#6 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013h.html#14 The cloud is killing traditional hardware and software
http://www.garlic.com/~lynn/2013m.html#70 architectures, was Open source software
http://www.garlic.com/~lynn/2013n.html#19 z/OS is antique WAS: Aging Sysprogs = Aging Farmers
http://www.garlic.com/~lynn/2013n.html#59 'Free Unix!': The world-changing proclamation made30yearsagotoday
http://www.garlic.com/~lynn/2014d.html#59 Difference between MVS and z / OS systems
http://www.garlic.com/~lynn/2014e.html#11 Can the mainframe remain relevant in the cloud and mobile era?
http://www.garlic.com/~lynn/2014f.html#21 Complete 360 and 370 systems found
http://www.garlic.com/~lynn/2014h.html#6 Demonstrating Moore's law
--
virtualization experience starting Jan1968, online at home since Mar1970
Anne & Lynn Wheeler
2014-07-21 13:17:05 UTC
re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test

other archaeological tales

not long after graduating and joining the science center ... other
recent ref
http://www.garlic.com/~lynn/2014i.html#13 IBM & Boyd
http://www.garlic.com/~lynn/2014i.html#31 Speed of computers--wave equation for the copper atom? (curiosity)

the company hired a new CSO ... as was common in the period, a
commercial CSO coming from gov. service, specializing in physical
security (in this case, head of the presidential detail). even tho I had
relatively recently started with the company ... I was considered one of
the most knowledgeable on computer security ... and was asked to run
around with the new CSO ... providing some detail about computer
security (with a little bit of physical security rubbing off on me)
... before the incident involving the CEO's sailing buddy and the first
install of a clone processor in a true blue commercial account.

for other drift ... I didn't learn about these guys until later
http://web.archive.org/web/20090117083033/http://www.nsa.gov/research/selinux/list-archive/0409/8362.shtml

for related drift ... recent post mentioning HSDT & link encryptors
http://www.garlic.com/~lynn/2014i.html#49 Sale receipt--obligatory

I really hated what we had to pay for T1 link encryptors (and it was
effectively impossible to get anything faster) ... and got involved in
doing our own. The objective was under $100 to produce and handling at
least 3mbyte/sec (not 3mbit/sec). Initially the corporate crypto group
said it significantly reduced standard crypto strength. It took me three
months to figure out how to explain to them what was going on (it
significantly *increased* standard crypto strength). It was a hollow
victory ... I was told I could build as many as I wanted ... but they
all had to be shipped to an address in maryland (and I couldn't use
any). That was when I realized there are three kinds of crypto: 1) the
kind they don't care about, 2) the kind you can't do, 3) the kind you
can only do for them.

past posts mentioning the three kinds of crypto:
http://www.garlic.com/~lynn/2008h.html#87 New test attempt
http://www.garlic.com/~lynn/2008i.html#86 Own a piece of the crypto wars
http://www.garlic.com/~lynn/2009p.html#32 Getting Out Hard Drive in Real Old Computer
http://www.garlic.com/~lynn/2010i.html#27 Favourite computer history books?
http://www.garlic.com/~lynn/2010o.html#43 Internet Evolution - Part I: Encryption basics
http://www.garlic.com/~lynn/2010p.html#19 The IETF is probably the single element in the global equation of technology competition than has resulted in the INTERNET
http://www.garlic.com/~lynn/2011g.html#20 TELSTAR satellite experiment
http://www.garlic.com/~lynn/2011g.html#60 Is the magic and romance killed by Windows (and Linux)?
http://www.garlic.com/~lynn/2011g.html#69 Is the magic and romance killed by Windows (and Linux)?
http://www.garlic.com/~lynn/2011h.html#0 We list every company in the world that has a mainframe computer
http://www.garlic.com/~lynn/2011n.html#63 ARPANET's coming out party: when the Internet first took center stage
http://www.garlic.com/~lynn/2011n.html#85 Key Escrow from a Safe Distance: Looking back at the Clipper Chip
http://www.garlic.com/~lynn/2012.html#63 Reject gmail
http://www.garlic.com/~lynn/2012i.html#70 Operating System, what is it?
http://www.garlic.com/~lynn/2012k.html#47 T-carrier
http://www.garlic.com/~lynn/2013d.html#1 IBM Mainframe (1980's) on You tube
http://www.garlic.com/~lynn/2013g.html#31 The Vindication of Barb
http://www.garlic.com/~lynn/2013i.html#69 The failure of cyber defence - the mindset is against it
http://www.garlic.com/~lynn/2013k.html#77 German infosec agency warns against Trusted Computing in Windows 8
http://www.garlic.com/~lynn/2013k.html#88 NSA and crytanalysis
http://www.garlic.com/~lynn/2013m.html#10 "NSA foils much internet encryption"
http://www.garlic.com/~lynn/2013o.html#50 Secret contract tied NSA and security industry pioneer
http://www.garlic.com/~lynn/2014.html#9 NSA seeks to build quantum computer that could crack most types of encryption
http://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
http://www.garlic.com/~lynn/2014e.html#25 Is there any MF shop using AWS service?
http://www.garlic.com/~lynn/2014e.html#27 TCP/IP Might Have Been Secure From the Start If Not For the NSA
--
virtualization experience starting Jan1968, online at home since Mar1970
Jon Elson
2014-07-24 20:52:05 UTC
Post by Dan Espen
Post by Anne & Lynn Wheeler
shortly after joining IBM ... I guess I started to be a problem ...
during the "Future System" period, I refused to work on FS, continued
to work on 370 and would even periodically ridicule FS
You weren't the problem. IBM management was the problem.
As history has proved, you were right all along.
The specs for FS were totally insane, for the technology available
at the time (Motorola 10K ECL or any equivalent). So, should
FS have been canceled as it could NEVER reach the goal, or kept
alive, as it would have been a very powerful machine? Was it
an all-out attempt to make a supercomputer which would sell maybe
less than a dozen units? Or, was it the basis of the next generation
of IBM mainframes?

The 370 series was a practical architecture, although the performance
of some of the lower models seems like it must have been intentionally
crippled to not interfere with the /15x and /16x machines.

Jon
Anne & Lynn Wheeler
2014-07-24 22:39:31 UTC
Post by Jon Elson
The specs for FS were totally insane, for the technology available
at the time (Motorola 10K ECL or any equivalent). So, should
FS have been canceled as it could NEVER reach the goal, or kept
alive, as it would have been a very powerful machine? Was it
an all-out attempt to make a supercomputer which would sell maybe
less than a dozen units? Or, was it the basis of the next generation
of IBM mainframes?
The 370 series was a practical architecture, although the performance
of some of the lower models seems like it must have been intentionally
crippled to not interfere with the /15x and /16x machine.
re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#55 IBM Programmer Aptitude Test

FS specs had a lot of blue-sky ideas ... for some of them there wasn't
even any idea about how they might be implemented. since FS was supposed
to completely replace 370 ... internal politics during the period were
suspending and/or killing off 370 efforts. some past posts
http://www.garlic.com/~lynn/submain.html#futuresys

some other refs:

Discussion of old FS evaluation
http://www.jfsowa.com/computer/memo125.htm
FS description and discussion
http://people.cs.clemson.edu/~mark/fs.html
wiki entry
http://en.wikipedia.org/wiki/IBM_Future_Systems_project

FS design/architecture was divided into something like 13
sections/areas. My wife worked for the head of one of the sections, had
some responsibility for dealing with other sections ... and was
repeatedly surprised/astounded by the lack of any substance backing up
some of their fantasies.

part of FS was a sort of object model with potentially five levels of
indirection (and storage accesses) per operand ... e.g. a "hardware" ADD
instruction that would handle whether the operands were decimal,
floating point, integer, etc ... or even mixed. one of the final nails
in the FS coffin was a study by the (IBM) Houston science center ...
that if an FS machine was made out of the fastest available hardware
... and an application from the 370/195 was moved over to it ... it
would have the throughput of a 370/145 (about a factor of 30 slowdown).
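For flavor, here is a toy cost model (entirely illustrative, not actual FS hardware; the reference counts are assumptions) of why resolving every operand through a chain of descriptors multiplies storage references per instruction:

```python
# Illustrative model: storage references needed to execute one
# two-operand ADD when each operand must be resolved through a chain
# of descriptor indirections before the data itself can be fetched.

def refs_per_instruction(operands=2, indirection_levels=5):
    """One reference to fetch the instruction, then for each operand:
    one reference per descriptor level plus one for the data."""
    fetch = 1
    per_operand = indirection_levels + 1
    return fetch + operands * per_operand

direct = refs_per_instruction(indirection_levels=0)   # conventional direct access
fs_like = refs_per_instruction(indirection_levels=5)  # FS-style descriptor chains
print(direct, fs_like)  # 3 vs 13 storage references per ADD
```

Even this crude count shows a several-fold multiplication of memory traffic per instruction; the actual factor-of-30 result would also have depended on memory latency, microcode overhead, and cache behavior.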

another feature was that it was to be a "single level store"
architecture ... somewhat carried over from tss/360. at the univ. I got
to play with cp67/cms on weekends and sometimes had to share the machine
with an IBM SE playing with TSS/360. At one point we did a synthetic
benchmark for Fortran edit, compile, link and execute. I got better
throughput and interactive response for 35 simulated users on cp67/cms
than he did for four simulated users on tss/360 (on the exact same
hardware). I've periodically claimed that a lot of what I did for the
cp67/cms paged-mapped filesystem in the early 70s took into account
"what not to do" from observing tss/360 (I could easily get three times
the native cp67/cms filesystem throughput). this contributed to my
periodically ridiculing the FS efforts (while continuing to work on 360
and then moving to vm370/cms ... during the FS period). posts mentioning
doing the cp67/cms paged-mapped filesystem
http://www.garlic.com/~lynn/submain.html#mmap
also part of recent discussion over in ibm-main
http://www.garlic.com/~lynn/2014i.html#66 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
http://www.garlic.com/~lynn/2014i.html#67 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
http://www.garlic.com/~lynn/2014i.html#68 z/OS physical memory usage with multiple copies of same load module at different virtual addresses

This goes into the major motivation for FS being a countermeasure to
clone controllers ... FS would have such tight integration between
processor and controllers that it would make it extremely difficult for
clones to keep up (but much of the actual specification to accomplish
that was totally lacking)
http://www.ecole.org/en/seances/CM07
other posts mentioning clone controller work
http://www.garlic.com/~lynn/subtopic.html#360pcm

A related subject is the end of ACS/360 (which also gets into tiered
processor performance)
http://people.cs.clemson.edu/~mark/acs_end.html

mentions that it was killed because management was afraid that it would
advance the state of the art too fast and they would lose control of
the market. at the end of the above, it goes into some of the acs/360
features finally showing up more than 20yrs later in es/9000.

the person responsible leaves and starts his own clone processor
company. the lack of 370 products during the FS period is then credited
with giving clone processors a market foothold. This recent post (in
this thread) mentions that it was initially with univ. & scientific
customers ... before breaking into the true blue commercial market.
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test

the folklore is that some of the FS people retreated to Rochester and
did the system/38 ... significantly simplifying a lot of FS features ...
and not having to worry about throughput in the market that they were
selling to. For instance, one of the simplifications was that they
treated all connected disks as a common storage pool for a single system
filesystem (with any file potentially having scatter allocation across
all available disks). As a result, everything had to be backed up as an
integral whole. A common failure of the time was a single disk failure
... but because of the common storage pool paradigm ... the one disk
would be replaced ... and then a complete system restore would be needed
(which could easily take 24hrs elapsed time).
http://en.wikipedia.org/wiki/IBM_System/38
and
http://www-03.ibm.com/ibm/history/exhibits/rochester/rochester_4009.html

the followon was the as/400, which was a replacement for the s/34, s/36
and s/38 (and dropped some of the s/38 FS features).
http://en.wikipedia.org/wiki/IBM_System_i
--
virtualization experience starting Jan1968, online at home since Mar1970
Quadibloc
2014-07-24 22:51:25 UTC
Post by Anne & Lynn Wheeler
the folklore is that some of the FS people retreat to Rochester and do
the system/38 ... significantly simplifying a lot of FS features ...
and not having to worry about throughput in the market that they were
selling to.
Well, the AS/400 and such did appear to include some of the features and philosophy associated with the Future System. So, while FS was too ambitious for its time, some of its basic ideas were sound enough to be worth keeping.

The IBM 360/85, despite performing well, thanks to cache, was a poor seller, but that didn't stop the 370/165 and the 3033 from being based on its microarchitecture.

It's wasteful to throw stuff away if it can still be used.

John Savard
Anne & Lynn Wheeler
2014-07-24 23:35:53 UTC
Post by Quadibloc
Well, the AS/400 and such did appear to include some of the features
and philosophy associated with the Future System. So, while FS was too
ambitious for its time, some of its basic ideas were sound enough to
be worth keeping.
The IBM 360/85, despite performing well, thanks to cache, was a poor
seller, but that didn't stop the 370/165 and the 3033 from being based
on its microarchitecture.
It's wasteful to throw stuff away if it can still be used.
re:
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test

FS threw in nearly every idea from the computer & academic literature of
the period ... even if they had absolutely no idea what it meant and/or
how to implement it (little or nothing was original with FS). It is not
surprising that some of it was eventually made to work (on the other
hand, lots of it would never work ... but they had little idea how to
differentiate the two; it goes way beyond "too ambitious for its time")

165 to 168 was moving from 2mic memory to less than 1/2mic memory and
optimizing the microcode so they reduced 370 instruction emulation from
2.1 machine cycles to 1.6 machine cycles per 370 instruction.

168-1 to 168-3 was doubling the cache size from 16kbytes to 32kbytes.

168-3 to 3033 started out as the 168-3 design mapped to 20% faster chips
... some other stuff eventually got the 3033 up to 1.5 times the 168-3.
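A quick arithmetic check on the ratios quoted above (using only the figures in this post, not independent measurements):

```python
# 165 -> 168: microcode tuning reduced 370 instruction emulation from
# 2.1 to 1.6 machine cycles per instruction.
cpi_165, cpi_168 = 2.1, 1.6
microcode_gain = cpi_165 / cpi_168            # ~1.31x from microcode alone
print(f"165 -> 168 microcode gain: {microcode_gain:.2f}x")

# 168-3 -> 3033: started as the same design on 20% faster chips, but the
# eventual machine was 1.5x a 168-3 ... so the rest of the gain came
# from design work beyond the chip speedup.
chip_gain = 1.20
final_3033 = 1.5
extra_design_work = final_3033 / chip_gain    # ~1.25x beyond faster chips
print(f"3033 gain beyond faster chips: {extra_design_work:.2f}x")
```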

note that the 3033 and 3081 were both done concurrently, part of the mad
rush after the failure of Future System to get stuff back into the 370
product pipeline (using warmed-over 370 technology) ... more here
(including how they compared poorly with clone processors):
http://www.jfsowa.com/computer/memo125.htm

note that all during this period the manufacturing cost of the 370/158
was at the knee of the (POK high-end) cost/performance curve. that was
one reason why the 370/158 engine was selected for the 303x channel
director.

however, the 4341, using even newer technology, came in at an even
better knee of the manufacturing cost/performance curve. while a 4341
was individually slower than a 3033 ... clusters of 4341s beat the 3033
on every metric (aggregate performance, floor space, price/performance,
environmentals, etc). at one point the head of POK was so threatened by
the 4341 threat to the 3033 that he managed to get the allocation of a
critical 4341 manufacturing component cut in half.

clusters of 4341s beat the 3033 in datacenters ... as well as being the
leading edge of the distributed computing tsunami ... large corporations
were installing hundreds at a time out in departmental areas
(departmental conference rooms inside ibm became a scarce commodity
because of being taken over by 4341s).

old 4300 email
http://www.garlic.com/~lynn/lhwemail.html#43xx

i've frequently commented that John may have done 801/risc to be the
exact opposite of FS complexity ... including the FS high-level
abstraction with enormous processing required in the microcode below the
instruction interface (including a large number of storage references to
resolve each instruction operand; aka the referenced study where
building an FS machine out of 370/195 technology results in a factor of
30 slowdown). Now almost every production architecture is either RISC or
CISC with a hardware-level layer that translates instructions into RISC
micro-ops for actual execution.
--
virtualization experience starting Jan1968, online at home since Mar1970
Anne & Lynn Wheeler
2014-07-25 14:37:01 UTC
re:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#55 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#70 IBM Programmer Aptitude Test

related post this morning over in ibm-main
http://www.garlic.com/~lynn/2014i.html#71 z/OS physical memory usage with multiple copies of same load module at different virtual addresses

mentioning single-level-store (not just s/38) ... both tss/360 and this
multics reference
http://en.wikipedia.org/wiki/Multics

from above:

Multics implemented a single level store for data access, discarding the
clear distinction between files (called segments in Multics) and process
memory. The memory of a process consisted solely of segments which were
mapped into its address space. To read or write to them, the process
simply used normal CPU instructions, and the operating system took care
of making sure that all the modifications were saved to disk. In POSIX
terminology, it was as if every file was mmap()ed; however, in Multics
there was no concept of process memory, separate from the memory used to
hold mapped-in files, as Unix has. All memory in the system was part of
some segment, which appeared in the file system; this included the
temporary scratch memory of the process, its kernel stack, etc.

... snip ...
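The mmap() analogy in the quoted passage can be sketched in a few lines of Python ... a minimal illustration of accessing a file's bytes with ordinary memory operations, obviously not Multics itself:

```python
# Single-level-store flavor via POSIX-style mmap(): stores into mapped
# memory land in the file itself, with no explicit read()/write().
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "segment")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)            # one page-sized "segment"

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as seg:
        seg[0:5] = b"hello"            # ordinary store into mapped memory
        data = bytes(seg[0:5])         # ordinary load back out

with open(path, "rb") as f:            # the store reached the file itself
    assert f.read(5) == b"hello"
```

The difference the quote emphasizes: in Multics *all* memory was file-backed segments, whereas here the mapping is an explicit, special case bolted onto a process with separate anonymous memory.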

the s/38 common filesystem pool scaled poorly ... just having to
save/restore all data as a single integral whole was barely tolerable
with a few disks ... but a large mainframe system with 300 disks would
require days for the operation.

other recent posts mentioning s/38
http://www.garlic.com/~lynn/2014b.html#11 Mac at 30: A love/hate relationship from the support front
http://www.garlic.com/~lynn/2014b.html#68 Salesmen--IBM and Coca Cola
http://www.garlic.com/~lynn/2014b.html#84 CPU time
http://www.garlic.com/~lynn/2014c.html#75 Bloat
http://www.garlic.com/~lynn/2014c.html#76 assembler
http://www.garlic.com/~lynn/2014e.html#7 Last Gasp for Hard Disk Drives
http://www.garlic.com/~lynn/2014e.html#48 Before the Internet: The golden age of online service
http://www.garlic.com/~lynn/2014e.html#50 The mainframe turns 50, or, why the IBM System/360 launch was the dawn of enterprise IT
http://www.garlic.com/~lynn/2014e.html#53 The mainframe turns 50, or, why the IBM System/360 launch was the dawn of enterprise IT
http://www.garlic.com/~lynn/2014g.html#96 IBM architecture, was Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
http://www.garlic.com/~lynn/2014g.html#97 IBM architecture, was Fifty Years of nitpicking definitions, was BASIC,theProgrammingLanguageT
http://www.garlic.com/~lynn/2014i.html#9 With hindsight, what would you have done?
--
virtualization experience starting Jan1968, online at home since Mar1970
Dan Espen
2014-07-25 17:14:21 UTC
Post by Anne & Lynn Wheeler
the s/38 common filesystem pool scaled poorly ... just having to
save/restore all data as single integral whole, was barely tolerable
with a few disks ... but large mainframe system with 300 disks would
require days for the operation.
You've said this many times.
Makes no sense to me at all.
When we backed up our data on an IBM System 34,
we backed up the application data.
It didn't matter at all what volume the data was on.

Trying to back up everything on disk would be
a huge waste of time besides being useless for an
actual restore. We needed to run our daily application
cycles to completion, then back up the application.
This would not necessarily take place all at once.
Each application got backed up.
--
Dan Espen
Shmuel (Seymour J.) Metz
2014-07-25 19:08:40 UTC
Post by Dan Espen
Trying to back up everything on disk would be
a huge waste of time besides being useless for an
actual restore.
WTF? At NSF we routinely backed up everything on a regular basis, with
incremental backups in between.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Dan Espen
2014-07-25 19:54:39 UTC
Post by Shmuel (Seymour J.) Metz
Post by Dan Espen
Trying to back up everything on disk would be
a huge waste of time besides being useless for an
actual restore.
WTF? At NSF we routinely backed up everything on a regular basis, with
incremental backups in between.
Hey, I can only tell you what we did for backup on our System/34
systems.

And, it worked well.

Thinking some more, I suppose some mainframe applications
don't really have a daily cycle and backup point.
--
Dan Espen
Anne & Lynn Wheeler
2014-07-25 20:27:55 UTC
Trying to back up everything on disk would be a huge waste of time
besides being useless for an actual restore. We needed to run our
daily application cycles to completion, then back up the application.
This would not necessarily take place all at once. Each application
got backed up.
re:
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test

common failure mode was single disk failure. because of scatter
allocation ... all files could have pieces across all disks ... a
single disk failure resulted in impacting *ALL* files ... required
restoring everything from scratch just to get a running system ... all
system files and all user files (nothing could be salvaged from
non-failed disks since arbitrary file pieces would be missing).
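A rough model of why scatter allocation turns one bad disk into a full restore (the disk and block counts here are illustrative assumptions, not s/38 figures):

```python
# A file whose blocks are scattered uniformly over d disks survives a
# single-disk failure only if none of its blocks landed on that disk.

def p_file_intact(blocks, disks):
    """Probability a file is untouched when one of `disks` drives dies."""
    return ((disks - 1) / disks) ** blocks

for blocks in (1, 8, 64, 256):
    print(blocks, p_file_intact(blocks, disks=8))
```

With 8 disks, a 64-block file has well under a 0.1% chance of surviving intact ... so in practice a single drive failure corrupts essentially every nontrivial file, and only a full restore yields a consistent system.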

guy that i sometimes worked with when I got to play disk engineer
over in bldgs 14/15
http://www.garlic.com/~lynn/subtopic.html#disk

filed original patent for raid in 1977
http://en.wikipedia.org/wiki/RAID

i never actually operated a s/38 ... but was told several times that the
operational restore problems for s/38 with a single disk failure were
sufficiently traumatic that they motivated s/38 to ship the first raid
support.

other posts in thread
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#55 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#70 IBM Programmer Aptitude Test
--
virtualization experience starting Jan1968, online at home since Mar1970
Dan Espen
2014-07-25 20:54:56 UTC
Post by Anne & Lynn Wheeler
Trying to back up everything on disk would be a huge waste of time
besides being useless for an actual restore. We needed to run our
daily application cycles to completion, then back up the application.
This would not necessarily take place all at once. Each application
got backed up.
common failure mode was single disk failure. because of scatter
allocation ... all files could have pieces across all disks ... a
single disk failure resulted in impacting *ALL* files ... required
restoring everything from scratch just to get a running system ... all
system files and all user files (nothing could be salvaged from
non-failed disks since arbitrary file pieces would be missing).
Oops, you're right.
I'm thinking too small.

I'm not sure how many hard disks there even were in the System/34
and our backup options were limited to magazines of diskettes.
I think a magazine held 10 of the 8 inch floppies.
We couldn't back up the whole system if we wanted to.
Well, we could but it would take forever and a bunch of
magazine changes.

Our CE once mentioned, we should take a look at the disk error
statistics on the system. After operating the system for years
the counters were still at zero.

He said all his systems were like that.

So, I guess we're destined to never see single level storage
on a large scale system. Even if AS/400 users manage to get by.
--
Dan Espen
Anne & Lynn Wheeler
2014-07-25 21:44:40 UTC
Post by Dan Espen
So, I guess we're destined to never see single level storage
on a large scale system. Even if AS/400 users manage to get by.
re:
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#73 IBM Programmer Aptitude Test

problem wasn't directly single level storage ... it was that s/38
simplified the infrastructure management by treating all disks as a
common allocation pool ... and just doing scatter allocation.

also RAID can go a long way to masking single disk failures.

vm370 spool had a similar/analogous problem ... doing scatter allocation
and treating all spool areas as a common pool. this wasn't bad for early
configurations with spool on a single disk ... but increasingly became a
problem as configurations scaled up. if any disk failed ... all spool
files were lost. vm370 spool had other issues: it had a checkpoint for
clean shutdown ... allowing relatively fast restart. However, if it
didn't have a clean shutdown, it required a warm restart ... which could
take 30-60 minutes for a large configuration ... and while vm370 would
normally do an automatic restart in well under 5mins ... it waited on
spool being up before restart finished ... so the system was unavailable
during the long warm restart.

i've mentioned before that I had a throughput issue in HSDT
http://www.garlic.com/~lynn/subnetwork.html#hsdt
old hsdt email
http://www.garlic.com/~lynn/lhwemail.html#hsdt

with vm370 spool, because vm370 RSCS/VNET used spool for storage. It
used a synchronous 4k (page) block read/write interface ... so it was
serialized while waiting for disk transfer. With other activity in the
system also using the spool system, RSCS/VNET might be limited to 5-8 4k
block transfers/sec (20k-30k bytes/sec, something that might be seen
with a couple of full-duplex 56kbit links). HSDT had multiple
full-duplex T1 (and faster) links (and while supporting TCP/IP, also ran
RSCS/VNET) ... a full-duplex T1 requires 300kbytes/sec sustained.
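The arithmetic behind the bottleneck can be sketched quickly (the 1.536 Mbit/s T1 payload rate, 24 x 64kbit channels, is my assumption; the 5-8 blocks/sec figure is from the post):

```python
# Synchronous one-block-at-a-time spool I/O vs. what a full-duplex T1
# link needs in sustained throughput.

BLOCK = 4096                                  # vm370 spool 4k page blocks

sync_blocks_per_sec = (5, 8)                  # achievable synchronously
sync_bytes = [n * BLOCK for n in sync_blocks_per_sec]
print(sync_bytes)                             # ~20k-33k bytes/sec

t1_bytes_per_sec = 1_536_000 / 8              # ~192 kbytes/sec one way
full_duplex = 2 * t1_bytes_per_sec            # ~384 kbytes/sec both ways
blocks_needed = full_duplex / BLOCK
print(full_duplex, blocks_needed)             # ~94 4k blocks/sec needed
```

So driving even one full-duplex T1 needs on the order of ten times more block transfers per second than the synchronous interface could deliver ... which is why the rewrite below required asynchronous, multi-block transfers with read-ahead and write-behind.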

So for HSDT, I decided to rewrite spool to allow RSCS/VNET to get
upwards of 1mbyte/sec-3mbyte/sec sustained spool throughput. This
required an asynchronous 4k block transfer interface ... with contiguous
allocation, multiple block transfers, write-behind and read-ahead.
Contiguous allocation had an option for drive affinity (all blocks on
the same disk). I also did a mechanism so vm370 could be up and
available before spool file recovery was complete ... and warm start ran
enormously faster (in the case of a non-clean checkpoint). Also
supported moving all data off a target drive concurrent with running of
the live system, as part of taking a drive offline for maintenance, as
well as adding drives on the fly (somewhat akin to what was later done
for some hardware RAID subsystems as part of recovery).

this is old email trying to get the spool changes into the internal
network "backbone" nodes that were starting to have multiple 56kbit
links. however, at this time, the communication group was on a
misinformation campaign to convince the corporation to convert the
internal network to SNA (internal network meetings changed to exclude
technical people and only involve management)
http://www.garlic.com/~lynn/2011.html#email8703006

other old vnet/rscs email
http://www.garlic.com/~lynn/lhwemail.html#vnet

past internal network posts
http://www.garlic.com/~lynn/subnetwork.html#internalnet

I did the majority of the spool changes writing in vs/pascal running in
a virtual address space ... and with some sleight-of-hand programming
tricks ... the pathlength ran faster than the assembler code running as
part of the kernel ... some past posts
http://www.garlic.com/~lynn/2000b.html#43 Migrating pages from a paging device (was Re: removal of paging device)
http://www.garlic.com/~lynn/2002b.html#44 PDP-10 Archive migration plan
http://www.garlic.com/~lynn/2003k.html#26 Microkernels are not "all or nothing". Re: Multics Concepts For
http://www.garlic.com/~lynn/2003k.html#63 SPXTAPE status from REXX
http://www.garlic.com/~lynn/2004g.html#19 HERCULES
http://www.garlic.com/~lynn/2004p.html#3 History of C
http://www.garlic.com/~lynn/2005d.html#38 Thou shalt have no other gods before the ANSI C standard
http://www.garlic.com/~lynn/2005s.html#28 MVCIN instruction
http://www.garlic.com/~lynn/2006.html#35 Charging Time
http://www.garlic.com/~lynn/2007c.html#21 How many 36-bit Unix ports in the old days?
http://www.garlic.com/~lynn/2007g.html#45 The Complete April Fools' Day RFCs
http://www.garlic.com/~lynn/2008g.html#22 Was CMS multi-tasking?
http://www.garlic.com/~lynn/2009h.html#63 Operating Systems for Virtual Machines
http://www.garlic.com/~lynn/2009o.html#12 Calling ::routines in oorexx 4.0
http://www.garlic.com/~lynn/2010k.html#26 Was VM ever used as an exokernel?
http://www.garlic.com/~lynn/2010k.html#35 Was VM ever used as an exokernel?
http://www.garlic.com/~lynn/2011e.html#25 Multiple Virtual Memory
http://www.garlic.com/~lynn/2011e.html#29 Multiple Virtual Memory
http://www.garlic.com/~lynn/2012g.html#18 VM Workshop 2012
http://www.garlic.com/~lynn/2012g.html#23 VM Workshop 2012
http://www.garlic.com/~lynn/2012g.html#24 Co-existance of z/OS and z/VM on same DASD farm
http://www.garlic.com/~lynn/2013n.html#91 rebuild 1403 printer chain
--
virtualization experience starting Jan1968, online at home since Mar1970
Shmuel (Seymour J.) Metz
2014-07-28 02:11:59 UTC
Post by Dan Espen
our backup options were limited to magazines of diskettes.
That explains a lot; mainframe backups were on tape, and one reel[1]
held as much as hundreds of 8" floppies. If we had to use floppies
then backup would have been a nightmare.
Post by Dan Espen
So, I guess we're destined to never see single level storage on a
large scale system.
Perhaps not, but I don't see backups as being an obstacle.

[1] The later cartridges, of course, held more.
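A rough sanity check on "one reel held as much as hundreds of 8-inch floppies" (nominal/assumed figures: 2400ft reel at 6250 bpi, ignoring inter-block gaps which reduce real capacity; 1.2 Mbyte double-density 8" floppy):

```python
# Nominal 9-track reel capacity vs. an 8-inch floppy.
reel_bytes = 2400 * 12 * 6250     # tape inches x bytes/inch, no gaps
floppy_bytes = 1_200_000          # double-density 8" floppy, nominal

print(reel_bytes)                           # 180,000,000 bytes (~180 MB)
print(reel_bytes // floppy_bytes)           # ~150 floppies per reel
```

That lands around 150 double-density floppies per reel ... and several hundred for the earlier lower-capacity (single-density) 8" diskettes, consistent with "hundreds".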
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Dan Espen
2014-07-29 04:33:52 UTC
Post by Shmuel (Seymour J.) Metz
Post by Dan Espen
our backup options were limited to magazines of diskettes.
That explains a lot; mainframe backups were on tape, and one reel[1]
held as much as hundreds of 8" floppies. If we had to use floppies
then backup would have been a nightmare.
All a question of how much data you need to back up.
I remember now, the Sys/34 had 2 magazine slots.
So we put 20 diskettes into the machine, by loading
2 magazines.

You could allocate with the control language using an ID like
M1.03 (magazine 1, diskette 3). When I left, we had all 20
diskettes in use, but not more than 50% full.
--
Dan Espen
Anne & Lynn Wheeler
2014-07-29 12:41:43 UTC
Post by Dan Espen
All a question of how much data you need to back up.
I remember now, the Sys/34 had 2 magazine slots.
So we put 20 diskettes into the machine, by loading
2 magazines.
You could allocate with the control language using an ID like
M1.03 (magazine 1, diskette 3). When I left, we had all 20
diskettes in use, but not more than 50% full.
ibm invented 8in floppies to use for loading microcode for 3830 disk
control unit ... then were used in various 370 models for loading
microcode for processors.
http://en.wikipedia.org/wiki/History_of_the_floppy_disk

a 3330 string held eight drives with removable 100mbyte disk packs, later
upgraded to double capacity 200mbyte packs (808 cylinders, up from 404)
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3330.html

a string of eight 3330s could connect directly to a 3830 controller or
to a string switch ... and a string switch could connect to two
different 3830s controllers.

3830 had a two-channel interface, allowing connection to two different
370s concurrently.

using string switch, it was possible to access a 3330 from up to four
different 370s. it was also possible to add a 2nd two-channel interface
to the 3830, allowing connection to up to eight 370s
http://bitsavers.trailing-edge.com/pdf/ibm/dasd/3330/GA26-1592-5_Reference_Manual_for_IBM_3830_Storage_Control_Model_1_and_IBM_3330_Disk_Storage_Nov76.pdf
and over on wayback machine
https://archive.org/details/bitsavers_ibm38xx383efApr72_6929160

these were removable disk packs ... so an installation might have a much
larger number of (200mbyte) packs than there were drives.

IBM also did the 3850, which had some number of 3330 drives connected to
an automated cartridge library that could move data back and forth between
cartridge and disk
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3850.html
which could have up to 4720 tape cartridges
http://www-03.ibm.com/ibm/history/exhibits/storage/storage_3850b.html


there were a number of complexes like lockheed dialog ... an early
online system, circa 1980, that had 300 drives connected to two
different 370 processors at a datacenter in silicon valley
http://www.historyofinformation.com/expanded.php?id=1069
http://en.wikipedia.org/wiki/Roger_K._Summit
http://en.wikipedia.org/wiki/Dialog_%28online_database%29

online before the internet
http://www.infotoday.com/searcher/jun03/ardito_bjorner.shtml
Roger Summit
http://www.infotoday.com/searcher/oct03/SummitWeb.shtml

also (dialog sold to proquest and old URLs gone 404)
https://web.archive.org/web/20140327061241/http://dialog.com/about/history/
and
http://web.archive.org/web/20121011155818/http://support.dialog.com/publications/chronolog/200206/1020628.shtml


past posts mentioning dialog
http://www.garlic.com/~lynn/99.html#150 Q: S/390 on PowerPC?
http://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital Equipment in the 70s?
http://www.garlic.com/~lynn/2001g.html#46 The Alpha/IA64 Hybrid
http://www.garlic.com/~lynn/2001m.html#51 Author seeks help - net in 1981
http://www.garlic.com/~lynn/2002g.html#3 Why are Mainframe Computers really still in use at all?
http://www.garlic.com/~lynn/2002h.html#0 Search for Joseph A. Fisher VLSI Publication (1981)
http://www.garlic.com/~lynn/2002l.html#53 10 choices that were critical to the Net's success
http://www.garlic.com/~lynn/2002l.html#61 10 choices that were critical to the Net's success
http://www.garlic.com/~lynn/2002m.html#52 Microsoft's innovations [was:the rtf format]
http://www.garlic.com/~lynn/2002n.html#67 Mainframe Spreadsheets - 1980's History
http://www.garlic.com/~lynn/2003e.html#66 History of project maintenance tools -- what and when?
http://www.garlic.com/~lynn/2006b.html#38 blast from the past ... macrocode
http://www.garlic.com/~lynn/2006n.html#55 The very first text editor
http://www.garlic.com/~lynn/2007k.html#60 3350 failures
http://www.garlic.com/~lynn/2009m.html#88 Continous Systems Modelling Package
http://www.garlic.com/~lynn/2009q.html#24 Old datasearches
http://www.garlic.com/~lynn/2009q.html#44 Old datasearches
http://www.garlic.com/~lynn/2009q.html#46 Old datasearches
http://www.garlic.com/~lynn/2009r.html#34 70 Years of ATM Innovation
http://www.garlic.com/~lynn/2010j.html#55 Article says mainframe most cost-efficient platform
http://www.garlic.com/~lynn/2011j.html#47 Graph of total world disk space over time?
http://www.garlic.com/~lynn/2014e.html#39 Before the Internet: The golden age of online services
--
virtualization experience starting Jan1968, online at home since Mar1970
Anne & Lynn Wheeler
2014-07-29 13:30:15 UTC
Post by Anne & Lynn Wheeler
ibm invented 8in floppies to use for loading microcode for 3830 disk
control unit ... then were used in various 370 models for loading
microcode for processors.
http://en.wikipedia.org/wiki/History_of_the_floppy_disk
re:
http://www.garlic.com/~lynn/2014i.html#90 IBM Programmer Aptitude Test

development for the 3880 disk controller had some problems. the
microcode development system was a large application running on an MVS
system with limited turn-around. the idea was to get the microcode
development system ported off MVS to vm370/cms and moved out to 4341s in
the departmental areas ... eliminating the datacenter bottleneck.

the other bottleneck was that there was a limited number of floppy disk
writers. the floppy disk drives in disk controllers were purely
read-only ... the solution was to get some number of floppy r/w drives
to go along with the port of the development system to vm370/cms, to
significantly improve development turn-around and productivity.

old email referencing moving MDB/MDS from MVS to vm370/cms and getting
r/w floppy drives
http://www.garlic.com/~lynn/2006v.html#email791010c
in this post
http://www.garlic.com/~lynn/2006v.html#17 Ranking of non-IBM mainframe builders

and this email about getting MDB/MDS moved to vm370/cms
http://www.garlic.com/~lynn/2006p.html#email810128
in this post
http://www.garlic.com/~lynn/2006p.html#40 25th Anniversary of the Personal Computer

other old 4341 email
http://www.garlic.com/~lynn/lhwemail.html#4341

when I transferred to San Jose Research, they let me wander around other
locations in the San Jose area, one of the places was the disk
engineering lab. at the time they were doing development testing using
dedicated, stand-alone mainframe processing time, prescheduled 7x24
around the clock. At one time they had attempted to use MVS for
concurrent testing; however, in that environment MVS had a 15min
mean-time-between-failure ... requiring manual restart of MVS. I offered
to redo the i/o supervisor to make it bullet-proof and never fail
... allowing any amount of on-demand, concurrent testing (greatly
improving productivity). after that they would periodically drag me in
to look at other issues. past posts getting to play disk engineer in
bldgs. 14&15
http://www.garlic.com/~lynn/subtopic.html#disk

later, as the 3880 disk controllers were getting close to ship ... field
engineering had regression testing of 57 typically expected tests ... old email ref
http://www.garlic.com/~lynn/2007.html#email801015
in this post
http://www.garlic.com/~lynn/2007.html#2 The Elements of Programming Style

MVS was still failing (requiring manual restart) for all 57 cases and in
2/3rds of cases, after restart there was no indication of what caused
the failure.

other posts in this thread:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#55 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#70 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#73 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#74 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#76 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#78 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#79 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#87 IBM Programmer Aptitude Test
--
virtualization experience starting Jan1968, online at home since Mar1970
Shmuel (Seymour J.) Metz
2014-07-29 14:36:17 UTC
Post by Anne & Lynn Wheeler
ibm invented 8in floppies to use for loading microcode for 3830 disk
control unit ... then were used in various 370 models for loading
microcode for processors.
Also used for running diagnostics, at least on the 3155.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Charlie Gibbs
2014-07-31 14:27:56 UTC
Post by Shmuel (Seymour J.) Metz
Post by Dan Espen
our backup options were limited to magazines of diskettes.
That explains a lot; mainframe backups were on tape, and one reel[1]
held as much as hundreds of 8" floppies. If we had to use floppies
then backup would have been a nightmare.
Sperry pushed for floppy-based software distribution on their
OS/3-based System 80 line. Yes, it was a nightmare, stuffing
45 floppies into the drive (even if we had the auto-feeding
drive, which we lovingly referred to as the "autocruncher").
It was even worse if you had a read error on disk 26, when
the update procedure offered no retry option. Not that that
would help if the error was, as you found out a couple of
hours later, unrecoverable. Once or twice we took a copy
of the bad floppy from another customer's set, even though
that was theoretically a firing offence.
--
/~\ ***@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!
h***@bbs.cpcn.com
2014-07-31 14:47:53 UTC
Post by Charlie Gibbs
Once or twice we took a copy
of the bad floppy from another customer's set, even though
that was theoretically a firing offence.
Why was that an offense? That was standard procedure in industry. It wasn't stealing a license or anything, just making a copy of something legitimate for a legitimate purpose.
Charlie Gibbs
2014-07-31 17:20:52 UTC
Post by h***@bbs.cpcn.com
Post by Charlie Gibbs
Once or twice we took a copy
of the bad floppy from another customer's set, even though
that was theoretically a firing offence.
Why was that an offense? That was standard procedure in industry.
It wasn't stealing a license or anything, just making a copy of
something legitimate for a legitimate purpose.
True, but this was at the time when people were first getting really
weird about software piracy. If we were to toe the line from HQ,
we'd be telling customers they'd be dead in the water for several
days while a new official copy was cut and delivered. This was
clearly unacceptable, so "don't ask, don't tell" became the
watchword in the trenches.
--
/~\ ***@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!
Morten Reistad
2014-07-31 21:59:31 UTC
Post by Charlie Gibbs
Post by Shmuel (Seymour J.) Metz
Post by Dan Espen
our backup options were limited to magazines of diskettes.
That explains a lot; mainframe backups were on tape, and one reel[1]
held as much as hundreds of 8" floppies. If we had to use floppies
then backup would have been a nightmare.
And a memory stick holds hundreds of magtapes.

Notes to readers about what kinds of capacities we are talking about:

An 8" single side single density floppy holds 237 1/4 kilobytes,
ex file system overhead, available in 1972. A 3.5" ED diskette
holds 2.88 megabytes, available in 1987.

A 2400' tape at 1600 TPI holds 45000 kilobytes unformatted,
about 22-24 megabytes with 2k blocks. This was the reference,
"exchange mode" of tapes for a few decades. A 3600' 6250 TPI
tape, the biggest there is in practical 9" tape, has an unformatted
storage of 270 million bytes, and a formatted storage at 2k
blocksize of 140-150 megabytes. Formatted space is a range because
of the variability of the inter record gaps.

This is a ratio of 189:1 at the low end and 92:1 at the high end.

Now we find 8G thumb drives in the Walmart bays, and 128G
in the specialist stores. Subtract around 4.5% for a decent
file system with lots of small files.

This is a ratio of 177:1 at the low end and 485:1 at the high end.
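For anyone who wants to check the arithmetic, here is a rough sketch in
Python using the unformatted figures quoted above. The thumb-drive ratios
land near, but not exactly on, the 177:1 and 485:1 quoted, since those
depend on which decimal/binary convention and formatted capacity you assume:

```python
# Capacity figures from the post above (approximate, unformatted).
KB = 1024

floppy_8in = 237.25 * KB      # 8" SSSD floppy, 237 1/4 KB (1972)
floppy_ed = 2880 * KB         # 3.5" ED diskette, 2.88 MB (1987)
tape_1600 = 2400 * 12 * 1600  # 2400' reel at 1600 BPI: 46,080,000 bytes
tape_6250 = 3600 * 12 * 6250  # 3600' reel at 6250 BPI: 270,000,000 bytes

# Floppies per tape reel, low end and high end:
print(round(tape_1600 / floppy_8in))  # ~190, close to the 189:1 above
print(round(tape_6250 / floppy_ed))   # ~92, the 92:1 figure

# Unformatted magtapes per thumb drive (decimal gigabytes assumed):
print(round(8e9 / tape_1600))         # ~174 reels on an 8G stick
print(round(128e9 / tape_6250))       # ~474 reels on a 128G stick
```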
Post by Charlie Gibbs
Sperry pushed for floppy-based software distribution on their
OS/3-based System 80 line. Yes, it was a nightmare, stuffing
45 floppies into the drive (even if we had the auto-feeding
drive, which we lovingly referred to as the "autocruncher").
It was even worse if you had a read error on disk 26, when
the update procedure offered no retry option. Not that that
would help if the error was, as you found out a couple of
hours later, unrecoverable. Once or twice we took a copy
of the bad floppy from another customer's set, even though
that was theoretically a firing offence.
I still have the telex from Prime computers dating November 1985
confirming the order of a 315 mb winchester drive, a 16 line
async rs232 (amlc) card and 2 megabytes of memory for NOK
465 000; then around USD 85k.

I don't miss those times at all. But I would like to recover
more software.

-- mrr
Ibmekon
2014-08-01 08:08:04 UTC
Post by Morten Reistad
Post by Charlie Gibbs
Post by Shmuel (Seymour J.) Metz
Post by Dan Espen
our backup options were limited to magazines of diskettes.
That explains a lot; mainframe backups were on tape, and one reel[1]
held as much as hundreds of 8" floppies. If we had to use floppies
then backup would have been a nightmare.
And a memory stick holds hundreds of magtapes.
Notes to readers about what kinds of capacities we are talking
An 8" single side single density floppy holds 237 1/4 kilobytes,
ex file system overhead, available in 1972. A 3.5" ED diskette
holds 2.88 megabytes, available in 1987.
A 2400' tape at 1600 TPI holds 45000 kilobytes unformatted,
about 22-24 megabytes with 2k blocks. This was the reference,
"exchange mode" of tapes for a few decades. A 3600' 6250 TPI
tape, the biggest there is in practical 9" tape, has an unformatted
storage of 270 million bytes, and a formatted storage at 2k
blocksize of 140-150 megabytes. Formatted space is a range because
of the variability of the inter record gaps.
This is a ratio of 189:1 at the low end and 92:1 at the high end.
Now we find 8G thumb drives in the Walmart bays, and 128G
in the specialist stores. Subtract around 4.5% for a decent
file system with lots of small files.
This is a ratio of 177:1 at the low end and 485:1 at the high end.
Post by Charlie Gibbs
Sperry pushed for floppy-based software distribution on their
OS/3-based System 80 line. Yes, it was a nightmare, stuffing
45 floppies into the drive (even if we had the auto-feeding
drive, which we lovingly referred to as the "autocruncher").
It was even worse if you had a read error on disk 26, when
the update procedure offered no retry option. Not that that
would help if the error was, as you found out a couple of
hours later, unrecoverable. Once or twice we took a copy
of the bad floppy from another customer's set, even though
that was theoretically a firing offence.
I still have the telex from Prime computers dating November 1985
confirming the order of a 315 mb winchester drive, a 16 line
async rs232 (amlc) card and 2 megabytes of memory for NOK
465 000; then around USD 85k.
I don't miss those times at all. But I would like to recover
more software.
-- mrr
I wonder whether you mean recover for archaeological reasons - or to use
the coding and concepts afresh?

September last year I installed Linux Mint XFCE, keeping Win XP in a
VirtualBox.
Yesterday I asked it to apt-get me a package - it just sat there. It
seems that it feels neglected: a new makeover is available. On
checking, it is considered ill-advised to try to update my version - a
new installation is recommended.

Isn't this where I came in?
Here we have a shiny new car. Can you fit any parts into your old car?
Of course not. Can you fit any parts from your old car into your shiny
new one?
Well, maybe if someone repaints them to fit the color scheme.

It is about time we had software that can be recycled. Apart from the
copyright notice.


Carl Goldsworthy
--
You can fool some of the people all of the time, and all of the people
some of the time.
Abraham Lincoln, (attributed) 16th president of US (1809 - 1865).

You can fool the majority of the people for just one day, and do as
you please for 5 years.
Me
Shmuel (Seymour J.) Metz
2014-08-03 01:17:00 UTC
Post by Morten Reistad
And a memory stick holds hundreds of magtapes.
ITYM that a 2014 memory stick holds hundreds of 1974 magtapes. Today's
tape drives, at least on mainframes, are much larger.
Post by Morten Reistad
A 2400' tape at 1600 TPI holds 45000 kilobytes unformatted, about
22-24 megabytes with 2k blocks.
TPI?

Figure 460 MB, with block sizes larger than 2KB being the norm.
Post by Morten Reistad
This was the reference, "exchange mode" of tapes for a few
decades.
A decade and a half, two at most. There was still a lot of
instrumentation using 1600 BPI after that, but 6250 was the norm for
data exchange between mainframes.
Post by Morten Reistad
Now we find 8G thumb drives
Microcenter gives those away free; I don't expect to be buying another
one smaller than (nominal) 32 GB. But those aren't in the same time frame
as 1600 and 6250 tape drives; current tape drives are orders of
magnitude larger.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Morten Reistad
2014-08-03 09:42:49 UTC
Post by Shmuel (Seymour J.) Metz
Post by Morten Reistad
And a memory stick holds hundreds of magtapes.
ITYM that a 2014 memory stick holds hundreds of 1974 magtapes. Today's
tape drives, at least on mainframes, are much larger.
Post by Morten Reistad
A 2400' tape at 1600 TPI holds 45000 kilobytes unformatted, about
22-24 megabytes with 2k blocks.
TPI?
Tracks Per Inch.
Post by Shmuel (Seymour J.) Metz
Figure 460 MB, with block sizes larger than 2KB being the norm.
2400 feet * 1600 tracks/inch * 12 inches/feet = 46 080 000 tracks
This is the figure you never will attain in practice.
Post by Shmuel (Seymour J.) Metz
Post by Morten Reistad
This was the reference, "exchange mode" of tapes for a few
decades.
A decade and a half, two at most. There was still a lot of
instrumentation using 1600 BPI after that, but 6250 was the norm for
data exchange between mainframes.
And 3600 feet * 6250 tracks/inch * 12 inches/feet = 270 000 000 tracks
Post by Shmuel (Seymour J.) Metz
Post by Morten Reistad
Now we find 8G thumb drives
Microcenter gives those away free; I don't expect to be buying another
one smaller than (nominal) 32 GB. But those aren't in the same time frame
as 1600 and 6250 tape drives; current tape drives are orders of
magnitude larger.
No, I just wanted to inject some real data into the discussion.

-- mrr
Shmuel (Seymour J.) Metz
2014-08-03 13:15:23 UTC
Post by Morten Reistad
Tracks Per Inch.
There were no 2400' open-reel magnetic tapes with 1600 tracks per
inch. There were tapes with 7 or 9 tracks[1] evenly distributed over a
.5" tape and recorded at 200, 556, 800, 1600 and 6250 BPI.
Post by Morten Reistad
2400 feet * 1600 tracks/inch
No such animal. The correct figure is 9 tracks.
Post by Morten Reistad
And 3600 feet * 6250 tracks/inch
Again, it's 9 tracks at 6250 BPI on each track.
Post by Morten Reistad
No, I just wanted to inject some real data into the disussion.
FSVO real.

[1] The 7-track tapes were written at 200, 556 or 800; the 9-track
tapes were written at 800, 1600 or 6250.
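To make the BPI/TPI distinction concrete: on 9-track tape, each frame
across the half-inch width carries 8 data bits plus a parity bit, i.e. one
byte, so the linear BPI figure is also bytes per inch, while the actual
track density never exceeds 9 tracks across half an inch. A quick Python
sketch, using the reel figures from this thread:

```python
# 9-track tape: one frame across the 0.5" width = 8 data bits + 1 parity
# bit = 1 byte, so BPI (frames per inch) equals bytes per inch of tape.
def unformatted_bytes(feet, bpi):
    return feet * 12 * bpi  # feet * 12 inches/foot * bytes/inch

print(unformatted_bytes(2400, 1600))  # 46,080,000 bytes, the ~45,000 KB reel
print(unformatted_bytes(3600, 6250))  # 270,000,000 bytes, the 270 MB figure

# The real track density: 9 tracks spread across a 0.5" tape width.
print(9 / 0.5)                        # 18.0 tracks per inch
```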
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Christian Brunschen
2014-08-03 14:52:33 UTC
Post by Shmuel (Seymour J.) Metz
Post by Morten Reistad
Tracks Per Inch.
There were no 2400' open-reel magnetic tapes with 1600 tracks per
inch. There were tapes with 7 or 9 tracks[1] evenly distributed over a
.5" tape and recorded at 200, 556, 800, 1600 and 6250 BPI.
Post by Morten Reistad
2400 feet * 1600 tracks/inch
No such animal. The correct figure is 9 tracks.
Tape track density appears to be measured in TPI, Tracks per Inch; bit
density in BPI, Bits per Inch. Have a look for instance at
<http://www.wtec.org/loyola/hdmem/final/ch4.pdf> . Also for instance
this description of a "1600 TPI" tape reader,
<http://www.computinghistory.org.uk/det/16968/Digi-Data-1600-tpi-tape-reader/>
.

So it seems there is such an animal.

// Christian
Osmium
2014-08-03 15:46:09 UTC
Post by Christian Brunschen
Post by Shmuel (Seymour J.) Metz
Post by Morten Reistad
Tracks Per Inch.
There were no 2400' open-reel magnetic tapes with 1600 tracks per
inch. There were tapes with 7 or 9 tracks[1] evenly distributed over a
.5" tape and recorded at 200, 556, 800, 1600 and 6250 BPI.
Post by Morten Reistad
2400 feet * 1600 tracks/inch
No such animal. The correct figure is 9 tracks.
Tape track density appears to be measured in TPI, Tracks per Inch; bit
density in BPI, Bits per Inch. Have a look for instance at
<http://www.wtec.org/loyola/hdmem/final/ch4.pdf> . Also for instance
this description of a "1600 TPI" tape reader,
<http://www.computinghistory.org.uk/det/16968/Digi-Data-1600-tpi-tape-reader/>
.
So it seems there is such an animal.
This thread has been bothering me for several days. I know a lot more about
tape than the average bear. Someone, somewhere, has made a mistake. I am
not, and do not want to be, the historian who explains what went wrong.

The second reference above refers to tpi with no further substance. The first
reference is too long to read. It seems to be speculation of some sort, and
he indicates that if you could get a lot of tracks you could, as a result, get a
lot of data. So what? It didn't happen. Common computer tapes are 7 or 9
tracks on half inch tape. Video recorders have been used to store binary
data. This leads to helical tracks and funny units. But there is no 1600
tracks side by side anywhere on that tape.

I seriously doubt that anyone other than a "futurist" ever thought about
"tracks per inch" very much; it's a kind of stupid metric. In short,
speculation does not make it so.
Osmium
2014-08-03 15:50:47 UTC
Post by Osmium
I seriously doubt that anyone other than a "futurist" ever thought about
"tracks per inch" very much, it's a kind of stupid metric . In short,
speculation does not make it so.
PS. I left out a very important point. One does not need any credentials
to make a web page or to be cited on the web.
Peter Flass
2014-08-03 19:39:26 UTC
Post by Osmium
Post by Osmium
I seriously doubt that anyone other than a "futurist" ever thought about
"tracks per inch" very much, it's a kind of stupid metric . In short,
speculation does not make it so.
PS. I left out a very important point. One does not need any credentials
to make a web page or to be cited on the web.
That's both positive and negative. On the one hand anyone with something
worthwhile to say can get "published" without going thru an entrenched
bureaucracy and/or censorship. On the other hand, any crackpot can have
his ideas put on a par with recognized experts.
--
Pete
Christian Brunschen
2014-08-03 20:02:55 UTC
Post by Osmium
Post by Christian Brunschen
Post by Shmuel (Seymour J.) Metz
Post by Morten Reistad
Tracks Per Inch.
There were no 2400' open-reel magnetic tapes with 1600 tracks per
inch. There were tapes with 7 or 9 tracks[1] evenly distributed over a
.5" tape and recorded at 200, 556, 800, 1600 and 6250 BPI.
Post by Morten Reistad
2400 feet * 1600 tracks/inch
No such animal. The correct figure is 9 tracks.
Tape track density appears to be measured in TPI, Tracks per Inch; bit
density in BPI, Bits per Inch. Have a look for instance at
<http://www.wtec.org/loyola/hdmem/final/ch4.pdf> . Also for instance
this description of a "1600 TPI" tape reader,
<http://www.computinghistory.org.uk/det/16968/Digi-Data-1600-tpi-tape-reader/>
.
So it seems there is such an animal.
This thread has been bothering me for several days. I know a lot more about
tape than the average bear. Someone, somewhere has made a mistake, I am
not, and do not want to be, a historian and explain what went wrong..
The second reference above refers to tpi and no further substance.
True, but it suggests that 'TPI' was at least a term that was in use,
whether for good reasons or bad ones.
Post by Osmium
The first
reference is too long to read.
Just open it and search for 'TPI', and there's a table of "tape
density projections", talking about both bit-density (kbpi) and track
density (tpi), as distinct and separate things.

I found both of these just by searching for "tape density TPI"; and it
simply turns out that yes, people were using that term.
Post by Osmium
It seems to be speculation of some sort
It appears to be, according to
<http://www.wtec.org/loyola/hdmem/final/toc.htm>, a "WTEC Panel Report
on The Future of Data Storage Technologies" (Sponsored by the National
Science Foundation, Defense Advanced Research Projects Agency and
National Institute of Standards and Technology of the United States
government). *shrugs*
Post by Osmium
and
he indicates if you could get a lot of tracks you could, as a result, get a
lot of data. So what? It didn't happen.
The point was simply that it appears that the term "tpi" for "tracks
per inch" was in fact in use, is all.
Post by Osmium
Common computer tapes are 7 or 9
tracks on half inch tape. Video recorders have been used to store binary
data. This leads to helical tracks and funny units. But there is no 1600
tracks side by side anywhere on that tape.
I seriously doubt that anyone other than a "futurist" ever thought about
"tracks per inch" very much, it's a kind of stupid metric . In short,
speculation does not make it so.
I don't think that chart was about being "futuristic", certainly not
in those parts that spoke about the past (the report is from 1999, and
talks about historical numbers from 1997 as well as projected ones for
the future).

And while it may not be particularly correct, the term "tpi" has been
seen in use (perhaps used somewhat interchangeably with "bpi")
elsewhere as well (see for instance
<http://www.sunhelp.org/pipermail/rescue/2004-April/101947.html>).

I'm not saying it's a correct term; just that a cursory internet
search indicates that the term seems to have been in use in the same
way that Morten was using it.

// Christian
Osmium
2014-08-03 20:32:00 UTC
Reply
Permalink
Post by Christian Brunschen
And while it may not be particularly correct, the term "tpi" has been
seen in use (perhaps used somewhat interchangeably with "bpi")
elsewhere as well (see for instance
<http://www.sunhelp.org/pipermail/rescue/2004-April/101947.html>).
I'm not saying it's a correct term; just that a cursory internet
search indicates that the term seems to have been in use in the same
way that Morten was using it.
OK, that was short enough to read. I think you are confusing typing errors
and brain farts with what is going on. This guy clearly speaks of
nine-track and 6250 tpi. Now I happen to know that there was, in fact, a
common bit density of 6250 bpi used with nine-track tapes. Wouldn't it be
an odd coincidence if there were also a 6250 TPI nine-track tape?
Especially since 6250 is so much greater than 9? Most people, when pressed,
would probably say the nine-track tape was 18 TPI (tracks per inch).

Anyone who has not had a brain fart before hitting send is, almost by
definition, a newbie. If I can get down to five a day or so, it has been a
very good day.
stoat
2014-08-04 05:56:25 UTC
Reply
Permalink
Post by Osmium
Post by Christian Brunschen
And while it may not be particularly correct, the term "tpi" has been
seen in use (perhaps used somewhat interchangeably with "bpi")
elsewhere as well (see for instance
<http://www.sunhelp.org/pipermail/rescue/2004-April/101947.html>).
I'm not saying it's a correct term; just that a cursory internet
search indicates that the term seems to have been in use in the same
way that Morten was using it.
I suspect that tpi actually stands for transitions per inch, referring to
the encoding, and would more-or-less be equivalent to bits per inch.
Post by Osmium
OK, that was short enough to read. I think you are confusing typing errors
and brain farts with what is going on. This guy clearly speaks of
nine-track and 6250 tpi. Now I happen to know that there was, in fact, a
common bit density of 6250 bpi used with nine-track tapes. Wouldn't it be
an odd coincidence if there were also a 6250 TPI nine-track tape?
Especially since 6250 is so much greater than 9? Most people, when pressed,
would probably say the nine-track tape was 18 TPI (tracks per inch).
Anyone who has not had a brain fart before hitting send is, almost by
definition, a newbie. If I can get down to five day or so, it has been a
very good day.
--brian
--
Wellington
New Zealand
Rob Warnock
2014-08-04 13:48:49 UTC
Reply
Permalink
stoat <***@fake.org> wrote:
+---------------
| I suspect that tpi actually stands for transitions per inch, referring to
| the encoding, and would more-or-less be equivalent to bits per inch.
+---------------

Indeed. ISTR that early 7- & 9-track tapes were "556 BPI" and "800 BPI"
NRZI (Non-Return to Zero, Inverting), but when density on 9-track tapes
started going up vendors used fancier encodings -- PE (Phase Encoding),
GCR (Group Code Recording), etc. -- and the terminology changed from using
"BPI" to "TPI" (as you suggest, meaning [magnetic flux] Transitions Per
Inch) and "FCI" (Flux Changes per Inch). IIRC, 1600 TPI used PE and
6250 FCI used GCR.

By using proper encoding (PE, RLL, GCR) that did not mandate flux changes
or transitions to occur on regular bit boundaries, it was possible to
encode a higher effective "user data bits per inch" than a naive count
of flux changes or transitions per inch would suggest, which may be why
they dropped the "BPI" usage.


-Rob

-----
Rob Warnock <***@rpw3.org>
627 26th Avenue <http://rpw3.org/>
San Mateo, CA 94403
Charlie Gibbs
2014-08-04 20:03:35 UTC
Reply
Permalink
Post by stoat
Post by Christian Brunschen
And while it may not be particularly correct, the term "tpi" has been
seen in use (perhaps used somewhat interchangeably with "bpi")
elsewhere as well (see for instance
<http://www.sunhelp.org/pipermail/rescue/2004-April/101947.html>).
I'm not saying it's a correct term; just that a cursory internet
search indicates that the term seems to have been in use in the same
way that Morten was using it.
I suspect that tpi actually stands for transitions per inch, referring to
the encoding, and would more-or-less be equivalent to bits per inch.
That makes more sense than many of the things that have appeared in
this thread so far. On the other hand, it could be another of those
erroneous expressions that have unfortunately gained traction, like
"9600-baud modem" or "DB-9 serial connector".
--
/~\ ***@kltpzyxm.invalid (Charlie Gibbs)
\ / I'm really at ac.dekanfrus if you read it the right way.
X Top-posted messages will probably be ignored. See RFC1855.
/ \ HTML will DEFINITELY be ignored. Join the ASCII ribbon campaign!
Shmuel (Seymour J.) Metz
2014-08-03 19:50:37 UTC
Reply
Permalink
Post by Christian Brunschen
Tape track density appears to be measured in TPI, Tracks per Inch;
Water is wet; the open-reel tape drives had track densities on the
order of 18/inch, not 1600. 1600 was a common bit density for a few
years, until 6250 BPI came along.
Post by Christian Brunschen
Tape track density appears to be measured in TPI, Tracks per Inch;
bit density in BPI, Bits per Inch. Have a look for instance at
<http://www.wtec.org/loyola/hdmem/final/ch4.pdf> .
That document shows a track density of 750 tpi, well short of
the claimed 1600 TPI. To say nothing of the fact that it is three
decades later than the tapes in question.
Post by Christian Brunschen
Also for instance this description of a "1600 TPI" tape reader,
<http://www.computinghistory.org.uk/det/16968/Digi-Data-1600-tpi-tape-reader/>
It's common for promotional literature to have substantial errors;
consult wikipedia (<http://en.wikipedia.org/wiki/9_track_tape>) or the
manual for any 9-track PE tape drive, e.g.,
<http://bitsavers.org/pdf/ibm/28xx/2803_2804/A22-6866-4_2400_Tape_Unit_2803_2804_Tape_Controls_Component_Description_Sep68.pdf>,
and see for yourself.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Ahem A Rivet's Shot
2014-08-04 07:09:09 UTC
Reply
Permalink
On Sun, 3 Aug 2014 14:52:33 +0000 (UTC)
Post by Christian Brunschen
Post by Shmuel (Seymour J.) Metz
Post by Morten Reistad
Tracks Per Inch.
There were no 2400' open-reel magnetic tapes with 1600 tracks per
inch. There were tapes with 7 or 9 tracks[1] evenly distributed over a
.5" tape and recorded at 200, 556, 800, 1600 and 6250 BPI.
Post by Morten Reistad
2400 feet * 1600 tracks/inch
No such animal. The correct figure is 9 tracks.
Tape track density appears to be measured in TPI, Tracks per Inch; bit
density in BPI, Bits per Inch. Have a look for instance at
<http://www.wtec.org/loyola/hdmem/final/ch4.pdf> . Also for instance
this description of a "1600 TPI" tape reader,
<http://www.computinghistory.org.uk/det/16968/Digi-Data-1600-tpi-tape-reader/>
If memory serves correctly (and it has been a long time) TPI stood
for Transitions Per Inch, a spatial analogue of baud, not Tracks Per Inch.
There has never been a tape with 1600 Tracks Per Inch, track density was
more like 18 tracks per inch (9 tracks on half inch tape).
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Osmium
2014-08-04 11:17:07 UTC
Reply
Permalink
Post by Ahem A Rivet's Shot
On Sun, 3 Aug 2014 14:52:33 +0000 (UTC)
Post by Christian Brunschen
Post by Shmuel (Seymour J.) Metz
Post by Morten Reistad
Tracks Per Inch.
There were no 2400' open-reel magnetic tapes with 1600 tracks per
inch. There were tapes with 7 or 9 tracks[1] evenly distributed over a
.5" tape and recorded at 200, 556, 800, 1600 and 6250 BPI.
Post by Morten Reistad
2400 feet * 1600 tracks/inch
No such animal. The correct figure is 9 tracks.
Tape track density appears to be measured in TPI, Tracks per Inch; bit
density in BPI, Bits per Inch. Have a look for instance at
<http://www.wtec.org/loyola/hdmem/final/ch4.pdf> . Also for instance
this description of a "1600 TPI" tape reader,
<http://www.computinghistory.org.uk/det/16968/Digi-Data-1600-tpi-tape-reader/>
If memory serves correctly (and it has been a long time) TPI stood
for Transitions Per Inch, a spatial analogue of baud, not Tracks Per Inch.
There has never been a tape with 1600 Tracks Per Inch, track density was
more like 18 tracks per inch (9 tracks on half inch tape).
Sure, that makes very good sense when thinking of flux changes in a recording. I
guess I got out of the tape biz before anyone invented that unfortunate
abbreviation.
Morten Reistad
2014-08-03 14:34:21 UTC
Reply
Permalink
Post by Shmuel (Seymour J.) Metz
Post by Morten Reistad
Tracks Per Inch.
There were no 2400' open-reel magnetic tapes with 1600 tracks per
inch. There were tapes with 7 or 9 tracks[1] evenly distributed over a
.5" tape and recorded at 200, 556, 800, 1600 and 6250 BPI.
Post by Morten Reistad
2400 feet * 1600 tracks/inch
No such animal. The correct figure is 9 tracks.
Ah, vocabulary; again.

My reference to TPI comes from the Prime computer glossies for
tape devices. BPI would be a better term, I agree.
Post by Shmuel (Seymour J.) Metz
Post by Morten Reistad
And 3600 feet * 6250 tracks/inch
Again, it's 9 tracks at 63250 BPI on each track.
ITYM 6250
Post by Shmuel (Seymour J.) Metz
Post by Morten Reistad
No, I just wanted to inject some real data into the discussion.
FSVO real.
[1] The 7-track tapes were written at 200, 556 or 80; the 9-track
tapes were written at 800, 1600 or 6250.
I stand by the capacity figures, now in BPI for your convenience;

800 bytes/inch * 12 inches/feet * 2400 feet = 23 040 000 bytes
1600 bytes/inch * 12 inches/feet * 2400 feet = 46 080 000 bytes
6250 bytes/inch * 12 inches/feet * 3600 feet = 270 000 000 bytes

And for the 7-track drives, where I never saw any reel longer than
2400' (nor for 800 BPI 9-track drives either)

200 bytes/inch * 12 inches/feet * 2400 feet = 5 760 000 bytes
556 bytes/inch * 12 inches/feet * 2400 feet = 16 012 800 bytes

Your 32G micro-SD card would take up the data from 5555 of those
200 BPI 7-track tapes, even when storing each sixbit byte in a full octet.
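Those capacity figures check out with a quick script (ignoring inter-record gaps, which cut real capacity considerably):

```python
# Bytes on one reel: linear density (bytes/inch, one 8-bit frame per inch
# across the 9 tracks) times tape length in inches. Gaps are ignored.
def reel_capacity(bpi, feet):
    return bpi * 12 * feet

print(reel_capacity(800, 2400))   # -> 23040000
print(reel_capacity(1600, 2400))  # -> 46080000
print(reel_capacity(6250, 3600))  # -> 270000000

# Reels of 200 BPI tape replaced by one 32 GB card:
print(32_000_000_000 // reel_capacity(200, 2400))  # -> 5555
```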

-- mrr
Shmuel (Seymour J.) Metz
2014-08-03 20:02:08 UTC
Reply
Permalink
Post by Morten Reistad
ITYM 6250
Yes.
Post by Morten Reistad
Your 32G micro-SD card
I don't have one. USB data keys are much more convenient. But my 8 GB
and 16 GB data keys hold a lot less than a 180 GB tape.
Post by Morten Reistad
would take up the data from 5555 of those 200 BPI 7-track tapes,
Except that they weren't contemporaneous. A fair comparison is 2014
nonrotating storage to 2014 tapes.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Anne & Lynn Wheeler
2014-08-04 03:18:34 UTC
Reply
Permalink
Post by Shmuel (Seymour J.) Metz
Except that they weren't contemporaneous. A fair comparison is 2014
nonrotating storage to 2014 tapes.
recent thread about SONY new tape over in ibm-main
http://www.garlic.com/~lynn/2014f.html#64 non-IBM: SONY new tape storage - 185 Terabytes on a tape
http://www.garlic.com/~lynn/2014f.html#65 non-IBM: SONY new tape storage - 185 Terabytes on a tape
http://www.garlic.com/~lynn/2014g.html#16 non-IBM: SONY new tape storage - 185 Terabytes on a tape
http://www.garlic.com/~lynn/2014g.html#75 non-IBM: SONY new tape storage - 185 Terabytes on a tape
http://www.garlic.com/~lynn/2014g.html#79 non-IBM: SONY new tape storage - 185 Terabytes on a tape

part of the issue in the thread was the transfer rates of the new tape
generation, not announced for ibm mainframe (potentially because the
transfer rates were too high).

news items
http://www.extremetech.com/computing/181560-sony-develops-tech-for-185tb-tapes-3700-times-more-storage-than-a-blu-ray-disc
http://www.gizmag.com/sony-185-tb-magnetic-tape-storage/31910/
http://www.latimes.com/business/technology/la-fi-tn-sony-185-tb-cassette-tape-storage-record-20140505-story.html

posts in this thread:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#55 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#70 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#73 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#74 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#76 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#78 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#79 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#87 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#90 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#91 IBM Programmer Aptitude Test
--
virtualization experience starting Jan1968, online at home since Mar1970
Ahem A Rivet's Shot
2014-08-04 08:13:54 UTC
Reply
Permalink
On Sun, 03 Aug 2014 16:02:08 -0400
Post by Shmuel (Seymour J.) Metz
I don't have one. USB data keys are much more convenient. But my 8 GB
and 16 GB data keys hold a lot less than a 180 GB tape.
Those are old data keys; the biggest USB key I've seen to date is a
1TB USB 3.0 device. It's expensive though, at over €800; OTOH 128GB keys can
be had down to €35 (there's an enormous variation in price - some go for
over €100 for 128GB). I've started to see 128GB micro SD cards too, which I
think is the highest-density data storage currently available, much higher
density than 160GB tapes (although the tapes do have the edge in price).
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Shmuel (Seymour J.) Metz
2014-08-04 15:39:25 UTC
Reply
Permalink
Post by Ahem A Rivet's Shot
Those are old data keys,
Yes; as I wrote elsewhere, my local Microcenter is giving them out
free. I wouldn't buy one with less than (nominal) 32 GB, and it won't
be long before the price drops enough that I won't buy anything
smaller than 64 GB.
Post by Ahem A Rivet's Shot
the biggest USB key I've seen to date is a
1TB USB 3.0 device it's expensive though at over €800,
A lot more expensive than a 4 TB tape.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Peter Flass
2014-07-25 22:50:45 UTC
Reply
Permalink
Post by Anne & Lynn Wheeler
Trying to back up everything on disk would be a huge waste of time
besides being useless for an actual restore. We needed to run our
daily application cycles to completion, then back up the application.
This would not necessarily take place all at once. Each application
got backed up.
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test
common failure mode was single disk failure. because of scatter
allocation ... all files could have pieces across all disks ... a
single disk failure resulted in impacting *ALL* files ... required
restoring everything from scratch just to get a running system ... all
system files and all user files (nothing could be salvaged from
non-failed disks since arbitrary file pieces would be missing).
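A toy simulation (made-up sizes) shows why scatter allocation turns one dead drive into a total loss: with blocks placed at random, almost every file has a piece on the failed disk.

```python
import random

random.seed(1)
DISKS, FILES, BLOCKS_PER_FILE = 10, 1000, 20

# Scatter allocation: each block of each file lands on a random disk.
files = [[random.randrange(DISKS) for _ in range(BLOCKS_PER_FILE)]
         for _ in range(FILES)]

failed = 0  # one disk dies
hit = sum(1 for f in files if failed in f)
print(f"{hit} of {FILES} files lose at least one block")
# Expected fraction untouched is (9/10)**20, about 12% -- so ~88% are hit.
```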
guy that i sometimes worked with when I got to play disk engineer
over in bldgs 14/15
http://www.garlic.com/~lynn/subtopic.html#disk
filed original patent for raid in 1977
http://en.wikipedia.org/wiki/RAID
i never actually operated s/38 ... but was told several times that the
operational restore problems for s/38 with single disk failure were
sufficiently traumatic that it motivated s/38 to ship the first raid
support.
I understand Multics had this problem originally, and they eventually
redesigned the filesystem to fix it.
--
Pete
Anne & Lynn Wheeler
2014-07-25 23:51:40 UTC
Reply
Permalink
Post by Peter Flass
I understand Multics had this problem originally, and they eventually
redesigned the filesystem to fix it.
re:
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#73 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#74 IBM Programmer Aptitude Test

reference to cp67/cms crashing & restarting 27 times in a single day ...
because of the crash and auto system restart (tech sq ... but across the
courtyard from 545):
http://www.multicians.org/thvv/360-67.html

(It is a tribute to the CP/CMS recovery system that we could get 27
crashes in in a single day; recovery was fast and automatic, on the
order of 4-5 minutes. Multics was also crashing quite often at that
time, but each crash took an hour to recover because we salvaged the
entire file system. This unfavorable comparison was one reason that the
Multics team began development of the New Storage System.)

... snip ...

i had done ascii/tty terminal support as undergraduate in the 60s which
was picked up and distributed as part of standard release. I had done a
one byte arithmetic hack (since no terminal supported lines longer than
255). Down the road, harvard got some kind of new tty device (i think
plotter) that supported line lengths longer than 255 ... USL did a quick
hack to make the length something like 1200 (or more?) ... but didn't
fix the one byte arithmetic ... so lengths were incorrectly calculated
resulting in the crashes.
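A hypothetical reconstruction of that failure mode: with a one-byte length field, any line longer than 255 wraps modulo 256.

```python
# One-byte arithmetic: the stored length is the real length mod 256.
def stored_length(actual_length):
    return actual_length & 0xFF

print(stored_length(200))   # -> 200 (fine: fits in one byte)
print(stored_length(1200))  # -> 176 (wrong: 1200 mod 256)
```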


Multics had problems with both salvaging the filesystem after a crash
(something like unix fsck or vm370 spool warm start w/o checkpoint
start) ... as well as scatter allocation
http://www.multicians.org/nss.html

In the initial design of the Multics file system, disk addresses were
assigned in increasing order, as if all the drives of a given device
type made up one big disk. We didn't think a lot about this approach, it
was just the easiest. One consequence of this address policy was that
files tended to have their pages stored on multiple disk drives, and all
drives were utilized about equally on average.

... snip ...
--
virtualization experience starting Jan1968, online at home since Mar1970
Peter Flass
2014-07-25 22:50:46 UTC
Reply
Permalink
Post by Dan Espen
Post by Anne & Lynn Wheeler
the s/38 common filesystem pool scaled poorly ... just having to
save/restore all data as single integral whole, was barely tolerable
with a few disks ... but large mainframe system with 300 disks would
require days for the operation.
You've said this many times.
Makes no sense to me at all.
When we backed up our data on an IBM System 34,
we backed up the application data.
It didn't matter at all what volume the data was on.
Trying to back up everything on disk would be
a huge waste of time besides being useless for an
actual restore. We needed to run our daily application
cycles to completion, then back up the application.
This would not necessarily take place all at once.
Each application got backed up.
IME it was often quicker to back up a whole pack with physical backup
rather than several datasets with logical backup.
--
Pete
Dan Espen
2014-07-25 23:44:20 UTC
Reply
Permalink
Post by Peter Flass
Post by Dan Espen
Post by Anne & Lynn Wheeler
the s/38 common filesystem pool scaled poorly ... just having to
save/restore all data as single integral whole, was barely tolerable
with a few disks ... but large mainframe system with 300 disks would
require days for the operation.
You've said this many times.
Makes no sense to me at all.
When we backed up our data on an IBM System 34,
we backed up the application data.
It didn't matter at all what volume the data was on.
Trying to back up everything on disk would be
a huge waste of time besides being useless for an
actual restore. We needed to run our daily application
cycles to completion, then back up the application.
This would not necessarily take place all at once.
Each application got backed up.
IME it was often quicker to back up a whole pack with physical backup
rather than several datasets with logical backup.
Got me with that one:

IME - In My Experience

I'm still struggling to grasp all the implications.

Suffice it to say, my only experience with even worrying about
backup was with the System/34 system I designed/programmed/installed.

If your datasets are scattered all over multiple volumes,
you need to back up all the volumes to have something useful.
You can't very well restore one volume if anything has happened
on the other volumes.

z/OS now supports single datasets with extents on multiple
volumes. I guess you have to be careful how you do that.
That must complicate the process.

I hope I never have to deal with the issue.
Sounds like a nightmare.
--
Dan Espen
Peter Flass
2014-07-26 11:23:50 UTC
Reply
Permalink
Post by Dan Espen
Post by Peter Flass
Post by Dan Espen
Post by Anne & Lynn Wheeler
the s/38 common filesystem pool scaled poorly ... just having to
save/restore all data as single integral whole, was barely tolerable
with a few disks ... but large mainframe system with 300 disks would
require days for the operation.
You've said this many times.
Makes no sense to me at all.
When we backed up our data on an IBM System 34,
we backed up the application data.
It didn't matter at all what volume the data was on.
Trying to back up everything on disk would be
a huge waste of time besides being useless for an
actual restore. We needed to run our daily application
cycles to completion, then back up the application.
This would not necessarily take place all at once.
Each application got backed up.
IME it was often quicker to back up a whole pack with physical backup
rather than several datasets with logical backup.
IME - In My Experience
I'm still struggling to grasp all the implications.
Physical copy copied a whole cylinder at a time, minimized seek time, and
wrote a single file to tape. Logical copy had to do a VTOC lookup for each
dataset to be copied, seek to the start, and then read each extent in order
with seeks for each; it usually wrote a separate tape file for each dataset
backed up.
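A toy timing model (with entirely made-up seek and read figures) illustrates the point: the physical dump pays one seek and streams cylinders in order, while the logical dump pays a VTOC lookup and a seek per extent.

```python
# Made-up timings for illustration only.
SEEK_MS, CYL_READ_MS = 25, 40

def physical_dump_ms(cylinders):
    # One seek to the start, then sequential cylinder reads.
    return SEEK_MS + cylinders * CYL_READ_MS

def logical_dump_ms(datasets, extents_per_ds, cyls_per_extent):
    # Per dataset: a VTOC lookup (a seek), then a seek per extent plus reads.
    per_ds = SEEK_MS + extents_per_ds * (SEEK_MS + cyls_per_extent * CYL_READ_MS)
    return datasets * per_ds

# A 400-cylinder pack holding 100 datasets of four 1-cylinder extents:
print(physical_dump_ms(400))       # -> 16025
print(logical_dump_ms(100, 4, 1))  # -> 28500
```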
Post by Dan Espen
Suffice it to say, my only experience with even worrying about
backup was with the System/34 system I designed/programmed/installed.
If your datasets are scattered all over multiple volumes,
you need to back up all the volumes to have something useful.
You can't very well restore one volume if anything has happened
on the other volumes.
z/OS now supports single datasets with extents on multiple
volumes. I guess you have to be careful how you do that.
That must complicate the process.
I hope I never have to deal with the issue.
Sounds like a nightmare.
Normally you'd define a separate volume group (forget the correct term) for
various groups of datasets, in some logical organization, so you wouldn't
have a single dataset spread all over your DASD farm but maybe over two or
three packs.
--
Pete
jmfbahciv
2014-07-26 12:39:06 UTC
Reply
Permalink
Post by Dan Espen
Post by Peter Flass
Post by Dan Espen
Post by Anne & Lynn Wheeler
the s/38 common filesystem pool scaled poorly ... just having to
save/restore all data as single integral whole, was barely tolerable
with a few disks ... but large mainframe system with 300 disks would
require days for the operation.
You've said this many times.
Makes no sense to me at all.
When we backed up our data on an IBM System 34,
we backed up the application data.
It didn't matter at all what volume the data was on.
Trying to back up everything on disk would be
a huge waste of time besides being useless for an
actual restore. We needed to run our daily application
cycles to completion, then back up the application.
This would not necessarily take place all at once.
Each application got backed up.
IME it was often quicker to back up a whole pack with physical backup
rather than several datasets with logical backup.
IME - In My Experience
I'm still struggling to grasp all the implications.
Suffice it to say, my only experience with even worrying about
backup was with the System/34 system I designed/programmed/installed.
If your datasets are scattered all over multiple volumes,
you need to back up all the volumes to have something useful.
You can't very well restore one volume if anything has happened
on the other volumes.
z/OS now supports single datasets with extents on multiple
volumes. I guess you have to be careful how you do that.
That must complicate the process.
I hope I never have to deal with the issue.
Sounds like a nightmare.
There are two kinds of backups: file-based backups and physical
disk backups. The one you designed was file-based and specific
to a "user". Operations, which needed to babysit and entire
system, had to have some way of backuping the system in case
the fit hit the shan. Note that a file system rarely crashed
but one physical disk did.

/BAH
Morten Reistad
2014-07-26 14:51:51 UTC
Reply
Permalink
Post by jmfbahciv
Post by Dan Espen
z/OS now supports single datasets with extents on multiple
volumes. I guess you have to be careful how you do that.
That must complicate the process.
I hope I never have to deal with the issue.
Sounds like a nightmare.
There are two kinds of backups: file-based backups and physical
disk backups. The one you designed was file-based and specific
to a "user". Operations, which needed to babysit and entire
system, had to have some way of backuping the system in case
the fit hit the shan. Note that a file system rarely crashed
but one physical disk did.
There is a third level: the i-node level, introduced by unix(?[1])
and embedded in posix and NFS.

Every file, directory etc. is an i-node. Each has a number on
the device it resides on, has some kind of type, and may
have data content (or may not, e.g. a soft link).

dump & restore handle these kinds of backups. They are an intermediate
level between a file system (tar, cpio etc) backup and a physical
level backup. They have advantages of both. Individual files can
be restored, and it restores the whole file system as it was, with
all the magic hard and soft links exactly as they were. (Whereas
tar/cpio etc. will restore two hard links to the same file as
two files, and will mangle some device nodes. Some versions also
save symbolic (not absolute, numeric) user&group names, which
may differ on a restored system.)

dump also does not save unused blocks, nor does restore need
to load them. The save is by i-node sequence, which is generally
quite linear on the disk, at least sufficiently so to matter
substantially for backup, and especially restore, times.

You will have to build the file system first, and the file system
you restore onto must be upwardly compatible with the dumped one.
I.e. ext2->ext4 will work.

The i-node file system design is kind of hard to wrap your head
around, but once you have, you wonder why the other ones don't do
this.
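A minimal sketch (not any real on-disk format) of that layering: the inode table owns the data, and directories are just name-to-inode-number maps, so two hard links are simply two names for one inode.

```python
inodes = {}  # inode number -> {"type": ..., "data": ..., "nlink": n}
root = {}    # a directory: name -> inode number

def create(name, data):
    ino = len(inodes) + 1  # arbitrary numbering for the sketch
    inodes[ino] = {"type": "file", "data": data, "nlink": 1}
    root[name] = ino
    return ino

def hard_link(existing, new_name):
    ino = root[existing]
    inodes[ino]["nlink"] += 1
    root[new_name] = ino  # same inode, second name

create("motd", b"hello")
hard_link("motd", "motd2")
print(root["motd"] == root["motd2"])  # -> True: one file, two names
print(inodes[root["motd"]]["nlink"])  # -> 2
```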

-- mrr

[1] was this really introduced by unix? It is the first instance
I can find, but there may be others, pre ca 1972 instances.
jmfbahciv
2014-07-27 13:19:01 UTC
Reply
Permalink
Post by Morten Reistad
Post by jmfbahciv
Post by Dan Espen
z/OS now supports single datasets with extents on multiple
volumes. I guess you have to be careful how you do that.
That must complicate the process.
I hope I never have to deal with the issue.
Sounds like a nightmare.
There are two kinds of backups: file-based backups and physical
disk backups. The one you designed was file-based and specific
to a "user". Operations, which needed to babysit and entire
system, had to have some way of backuping the system in case
the fit hit the shan. Note that a file system rarely crashed
but one physical disk did.
There is a third level. The i-node level, introduced by unix(?[1])
and embedded in posix and NFS.
Every file, directory etc. is an i-node. They have a number on
the device they reside on, and have some kind of type and may
have data content (or it may not, e.g. a soft link).
dump & restore handles these kinds of backups. They are an intermediate
level between a file system (tar, cpio etc) backup and a physical
level backup. They have advantages of both. Individual files can
be restored, and it restores the whole file system as it was, with
all the magic hard and soft links exactly as they were. (Whereas
tar/cpio etc. will restore two hard links to the same file as
two files, and will mangle some device nodes. Some versions also
save symbolic (not absolute, numeric) user&group names, which
may differ on a restored system.)
dump also does not save unused blocks, nor does restore need
to load them. The save is by i-node sequence, which is generally
quite linear on the disk, at least sufficiently to matter
substantially on the backup, and especially the restore times.
You will have to build the file system first, and the restored
file system must be upwardly compatible with the restored one.
I.e. efs2->efs4 will work,
The i-node file system design is kind of hard to wrap your head
around, but once you have you wonder why the other ones don't do
this.
What other OSes had soft links between files?
Post by Morten Reistad
-- mrr
[1] was this really introduced by unix? It is the first instance
I can find, but there may be others, pre ca 1972 instances.
The inode is almost equivalent to the files which were the xxx.UFD,
xxx.MFD or xxx.SFD functionality on TOPS-10.

/BAH
Morten Reistad
2014-07-27 16:15:33 UTC
Reply
Permalink
Post by jmfbahciv
Post by Morten Reistad
The i-node file system design is kind of hard to wrap your head
around, but once you have you wonder why the other ones don't do
this.
What other OSes had soft links between files?
In 1972? I don't know. (btw, the soft link was a later add-on, the
first i-node system only had the hard link)
Post by jmfbahciv
Post by Morten Reistad
-- mrr
[1] was this really introduced by unix? It is the first instance
I can find, but there may be others, pre ca 1972 instances.
The inode is almost equivalent to the files which were the xxx.UFD,
xxx.MFD or xxx.SFD functionality on TOPS-10.
Which tells me that you haven't understood the i-node design. It
is about layering the formatted storage and the file system as
distinct entities on top of each other. The tops10 file system
never had any such layering. Nor did tops20, multics[1], or
any of the early OSes. I don't know about the classics like ctss.

-- mrr

[1] unless you count the segments and segdirs as a middle layer.
jmfbahciv
2014-07-28 12:42:12 UTC
Reply
Permalink
Post by Morten Reistad
Post by jmfbahciv
Post by Morten Reistad
The i-node file system design is kind of hard to wrap your head
around, but once you have you wonder why the other ones don't do
this.
What other OSes had soft links between files?
In 1972? I don't know. (btw, the soft link was a later add-on, the
first i-node system only had the hard link)
The only things I can think of which had soft-links between files
are something like ISAM or .HGH and .LOW file pairs. GETSEGs were
set up in the code on TOPS-10; the info wasn't in the file directory
entry block.
Post by Morten Reistad
Post by jmfbahciv
Post by Morten Reistad
-- mrr
[1] was this really introduced by unix? It is the first instance
I can find, but there may be others, pre ca 1972 instances.
The inode is almost equivalent to the files which were the xxx.UFD,
xxx.MFD or xxx. SFD functionality on TOPS-10.
Which tells me that you haven't understood the i-node design. It
is about layering the formatted storage and the file system as
distinct entities on top of each other. The tops10 file system
never had any such layering. Nor did tops20, multics[1], or
any of the early OSes. I don't know about the classics like ctss.
I was thinking about where the data to soft link files would have
to be stored and saved. Have you ever opened a xxx.?FD file?
The data in that "file" would also have to be saved during a
BACKUP which didn't mirror the physical disk. I'm NOT talking
about the subsequent functionality which the OS and users use.
Post by Morten Reistad
[1] unless you count the segments and segdirs as a middle layer.
Anne & Lynn Wheeler
2014-07-26 14:18:53 UTC
Reply
Permalink
Post by Dan Espen
z/OS now supports single datasets with extents on multiple
volumes. I guess you have to be careful how you do that.
That must complicate the process.
z/OS has an enormous list of issues.

it still only supports ckd disks ... which haven't been manufactured for
decades ... all being simulated on large (fixed-block) disk subsystems
that make extensive use of virtual volumes and raid (with hardware raid
responsible for masking single disk failures). the virtual simulated
3390 data organization may have little to do with the actual physical
layout on real disks.

the ckd disks simulated are some flavor of 3390 with some sleight of hand
that supports max. size that tends to be small multiples of real 3390s
(but enormously smaller than the real disks being used) ... 3390
3gb, 9gb, 27gb, and 54gb.

recent 3390 "model A" ... DS8000 release 4 LIC, configuration supports
3390 devices from 1 to 268,434,453 (simulated) 3390 cylinders (max
225tb), z/os v1r10 & v1r11 only supports up to 262,668 max 3390
(simulated) cylinders (223gb).

ds8870 ref
http://www-03.ibm.com/systems/storage/disk/ds8000/specifications.html

risc power7 processors, max. 1tbyte memory, up to 3,072TB disk
(supporting a variety of real industry standard disks).

I've posted recently about z196 max i/o benchmark that used 104 FICONS
to some number of (presumably ds8000) disk subsystems that got 2M
IOPS. FICON is a heavy-weight mainframe channel emulation layer built on
industry standard fibre channel that enormously reduces throughput
compared to native FCS. About the same time as the z196 benchmark,
there was an announcement of FCS for e5-2600 claiming over a million IOPS
(two such FCS would have higher throughput than 104 FICON). z196 has
other issues, the claim is that max. i/o instruction SSCH/sec is 2.2M
with all system support processors (SSPs) running 100% cpu utilization
... but the recommendation for normal operation is that SSP utilization
be kept to 70% or less (1.5M SSCH/sec). posts mentioning FICON
http://www.garlic.com/~lynn/submisc.html#ficon

I haven't seen any published benchmarks for the current ec12 ... but
ec12 announcement material said it would have only 30% higher i/o
throughput than z196.

posts in thread:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#55 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#70 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#73 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#74 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#76 IBM Programmer Aptitude Test
--
virtualization experience starting Jan1968, online at home since Mar1970
Shmuel (Seymour J.) Metz
2014-07-28 11:51:07 UTC
Reply
Permalink
z/OS now supports single datasets with extents on multiple volumes.
Now? Multivolume data sets have been around since OS/360. Are you
perhaps thinking of striped data sets?
That must complicate the process.
To some extent.
Sounds like a nightmare.
Not nearly as much as using floppies for backup. What's the emoticon
for runs away shrieking in disgust and terror?
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Dan Espen
2014-07-29 04:18:40 UTC
Reply
Permalink
Post by Shmuel (Seymour J.) Metz
z/OS now supports single datasets with extents on multiple volumes.
Now? Multivolume data sets have been around since OS/360. Are you
perhaps thinking of striped data sets?
Yes I am.
Sorry again.
I guess I should look things up again before I post.

Extended format datasets are always multi-volume,
so I'm at least near the ballpark.
Post by Shmuel (Seymour J.) Metz
That must complicate the process.
To some extent.
I sure wouldn't want to deal with it.

Just took a look in ISMF.
I see where a storage class called striped but
I don't see the link to a volume group.

How many volume groups would a site need?
Post by Shmuel (Seymour J.) Metz
Sounds like a nightmare.
Not nearly as much as using floppies for backup.
Those magazines were a thing of beauty.
To be fair, we never had an I/O problem with the magazines.
You slide in your diskettes and leave them in so you don't
have to handle the floppies.

At our main site we had to back up all the application data
for 3 other System/34 plants running the same application.

We needed the space on all 10 diskettes at the main site.
Post by Shmuel (Seymour J.) Metz
What's the emoticon
for runs away shrieking in disgust and terror?
:( --!--?--*-->
--
Dan Espen
Shmuel (Seymour J.) Metz
2014-07-29 14:28:22 UTC
Reply
Permalink
Post by Dan Espen
I see where a storage class called striped
Isn't that in the data class?
Post by Dan Espen
How many volume groups would a site need?
There is no "one size fits all."
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Dan Espen
2014-07-30 00:29:13 UTC
Reply
Permalink
Post by Shmuel (Seymour J.) Metz
Post by Dan Espen
I see where a storage class called striped
Isn't that in the data class?
No, I found it in "storage" class in ISMF.
I'm not actually sure our site uses any striped datasets.
We're a software development shop. We only set
up what our customers use, and sometimes not even then.
We don't run any "production".
Post by Shmuel (Seymour J.) Metz
Post by Dan Espen
How many volume groups would a site need?
There is no "one size fits all."
Sure, but if I'm trying to squeeze every ounce of throughput
out of a system, I'd want my striped datasets on as many volumes
as possible, but I wouldn't want volume copy to get overly
complicated.
--
Dan Espen
Peter Flass
2014-07-29 10:09:52 UTC
Reply
Permalink
Post by Shmuel (Seymour J.) Metz
z/OS now supports single datasets with extents on multiple volumes.
Now? Multivolume data sets have been around since OS/360. Are you
perhaps thinking of striped data sets?
That must complicate the process.
To some extent.
Sounds like a nightmare.
Not nearly as much as using floppies for backup. What's the emoticon
for runs away shrieking in disgust and terror?
I forget what machine we were talking about, presumably not the AS/400, but
it would seem like there would have been a tape drive available that
someone didn't want to pay for.
--
Pete
Dan Espen
2014-07-29 13:08:01 UTC
Reply
Permalink
Post by Peter Flass
Post by Shmuel (Seymour J.) Metz
z/OS now supports single datasets with extents on multiple volumes.
Now? Multivolume data sets have been around since OS/360. Are you
perhaps thinking of striped data sets?
That must complicate the process.
To some extent.
Sounds like a nightmare.
Not nearly as much as using floppies for backup. What's the emoticon
for runs away shrieking in disgust and terror?
I forget what machine we were talking about, presumably not the AS/400, but
it would seem like there would have been a tape drive available that
someone didn't want to pay for.
IBM System/34.

Don't think so. I don't remember having that option.
This page doesn't mention tape:

http://en.wikipedia.org/wiki/IBM_System/34
--
Dan Espen
Peter Flass
2014-07-29 16:09:33 UTC
Reply
Permalink
Post by Dan Espen
Post by Peter Flass
Post by Shmuel (Seymour J.) Metz
z/OS now supports single datasets with extents on multiple volumes.
Now? Multivolume data sets have been around since OS/360. Are you
perhaps thinking of striped data sets?
That must complicate the process.
To some extent.
Sounds like a nightmare.
Not nearly as much as using floppies for backup. What's the emoticon
for runs away shrieking in disgust and terror?
I forget what machine we were talking about, presumably not the AS/400, but
it would seem like there would have been a tape drive available that
someone didn't want to pay for.
IBM System/34.
Don't think so. I don't remember having that option.
http://en.wikipedia.org/wiki/IBM_System/34
that's a surprise - at least not from IBM.

http://books.google.com/books?id=l9x1-BjfXSwC&lpg=PA70&ots=hV5VIzwc3j&dq=%22system%2F34%22%20tape&pg=PA70#v=onepage&q=%22system/34%22%20tape&f=false
--
Pete
Dan Espen
2014-07-29 16:30:11 UTC
Reply
Permalink
Post by Peter Flass
Post by Dan Espen
Post by Peter Flass
Post by Shmuel (Seymour J.) Metz
z/OS now supports single datasets with extents on multiple volumes.
Now? Multivolume data sets have been around since OS/360. Are you
perhaps thinking of striped data sets?
That must complicate the process.
To some extent.
Sounds like a nightmare.
Not nearly as much as using floppies for backup. What's the emoticon
for runs away shrieking in disgust and terror?
I forget what machine we were talking about, presumably not the AS/400, but
it would seem like there would have been a tape drive available that
someone didn't want to pay for.
IBM System/34.
Don't think so. I don't remember having that option.
http://en.wikipedia.org/wiki/IBM_System/34
that's a surprise - at least not from IBM.
http://books.google.com/books?id=l9x1-BjfXSwC&lpg=PA70&ots=hV5VIzwc3j&dq=%22system%2F34%22%20tape&pg=PA70#v=onepage&q=%22system/34%22%20tape&f=false
Interesting.

Reading the fine print, it attached to the communications port.
Probably used Bi-Sync. (All our systems communicated over
Bi-Sync.) You'd have to write your own backup programs but
I'd guess Mitron had some software.
We used RPG for Bi-Sync stuff. Pretty simple.

It wasn't until System/36 that tapes showed up according to
Wikipedia.
--
Dan Espen
Shmuel (Seymour J.) Metz
2014-07-29 14:31:30 UTC
Reply
Permalink
In
<1063584717428320977.257593peter_flass-***@news.eternal-september.org>,
on 07/29/2014
Post by Peter Flass
I forget what machine we were talking about,
S/34, which I believe is a successor to the S/3. Certainly on the
small side, but the idea of relying on floppies for backup leaves me
glad that it's not my dog.
--
Shmuel (Seymour J.) Metz, SysProg and JOAT <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action. I reserve the
right to publicly post or ridicule any abusive E-mail. Reply to
domain Patriot dot net user shmuel+news to contact me. Do not
reply to ***@library.lspace.org
Dan Espen
2014-07-30 00:24:34 UTC
Reply
Permalink
Post by Shmuel (Seymour J.) Metz
In
on 07/29/2014
Post by Peter Flass
I forget what machine we were talking about,
S/34, which I believe is a successor to the S/3. Certainly on the
small side, but the idea of relying on floppies for backup leaves me
glad that it's not my dog.
Yep, S/3 then S/32 then S/34.

In fact the application I developed was started by someone else
on a S/32. The machine was totally inadequate to the task and
was replaced before I started. A S/32 looks like a desk with
a tiny little screen on it.
--
Dan Espen
Anne & Lynn Wheeler
2014-07-26 15:23:17 UTC
Reply
Permalink
Post by Peter Flass
IME it was often quicker to back up a whole pack with physical backup
rather than several datasets with logical backup.
recent reference to having done cmsback in the late 70s
for internal installations
http://www.garlic.com/~lynn/2014i.html#58 How Comp-Sci went from passing fad to must have major
some old cmsback email
http://www.garlic.com/~lynn/lhwemail.html#cmsback
and some past posts
http://www.garlic.com/~lynn/submain.html#backup

it went through a couple internal releases and then had support for
client platforms ... and released as workstation datasave facility
(WDSF).

it did incremental new/changed file backup. internally it started out
being used by people who had accidentally erased/corrupted a file or
wanted an earlier version of a file. it then started being used to
reduce nightly full pack/drive backups to once a week. a single disk
failure would restore the most recent full pack/drive backup and then
restore latest more recent incremental new/changed files also on the
same disk.
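
The scheme described (periodic full backups plus nightly captures of only new/changed files, with restore replaying the full backup and then the incrementals in order) can be sketched roughly as below. This is a hedged illustration of the general technique, not cmsback's actual design; files are modeled as a simple name-to-contents dict.

```python
# Sketch of incremental backup: capture only files that are new or
# changed since the last backup; restore = full backup + incrementals
# applied oldest to newest.

def incremental(current, last_seen):
    """Files new or changed relative to the previously backed-up state."""
    return {name: data for name, data in current.items()
            if last_seen.get(name) != data}

def restore(full, incrementals):
    disk = dict(full)
    for inc in incrementals:        # replay in chronological order
        disk.update(inc)
    return disk

full = {"a": "v1", "b": "v1"}
mon = incremental({"a": "v2", "b": "v1"}, full)                 # only "a" changed
tue = incremental({"a": "v2", "b": "v1", "c": "v1"},
                  {"a": "v2", "b": "v1"})                        # "c" is new
assert restore(full, [mon, tue]) == {"a": "v2", "b": "v1", "c": "v1"}
```

The payoff is exactly the one described above: a single-file restore needs only the incremental that holds it, while a full-disk restore needs the last full backup plus the incrementals since.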

it then morphed into ADSM (adstar storage manager) during period where
the disk division was reorganized and rebranded in preparation for
spinning off into separate company. gerstner was then brought in ... and
he reversed the breakup
http://www.garlic.com/~lynn/submisc.html#gerstner

but then later sold-off the disk division anyway ... at which time some
amount of the disk division software was kept and moved into different
organization ... ADSM morphing into TSM

posts in this thread:
http://www.garlic.com/~lynn/2014i.html#40 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#44 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#47 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#48 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#52 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#54 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#55 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#69 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#70 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#73 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#74 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#76 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#78 IBM Programmer Aptitude Test
--
virtualization experience starting Jan1968, online at home since Mar1970
Alan Bowler
2014-10-23 21:32:42 UTC
Reply
Permalink
Post by Peter Flass
IME it was often quicker to back up a whole pack with physical backup
rather than several datasets with logical backup.
True, if you're backing up everything, AND the pack is near full,
then physical (image) backups of the pack are faster than
a full logical (by file) backup.

Some drawbacks:

1) It takes even less time to do occasional full backups, with
more frequent incremental backups of what has changed.
2) Restoring selected files is difficult to impossible from a tape
image of a full disk. In our experience, at Waterloo,
individual files needed to be restored because a user
fumble-fingered something far more often than because
of disk failure.
3) Hardware and software failures can and do cause damage
to file system structures that proceed to propagate, and
cause more damage (e.g. dual allocations).
If your image backup was done after the initial damage,
but before it was noticed, the restored file system
will again fall apart.
4) Image backups generally require that the pack(s),
and usually the system, be offline during the backup
so you don't grab an image of file system
structures that are inconsistent (partially updated).
Logical backups can be done on a running system.
5) Logical backups can be restored to other disk
configurations.
Peter Flass
2014-10-24 11:29:50 UTC
Reply
Permalink
Post by Alan Bowler
Post by Peter Flass
IME it was often quicker to back up a whole pack with physical backup
rather than several datasets with logical backup.
True, if you're backing up everything, AND the pack is near full,
then physical (image) backups of the pack are faster than
a full logical (by file) backup.
1) it is even less time to do occasional full backups, with
more frequent incremental backups of what has changed.
This takes longer to restore a whole pack, as you have to process a number
of tapes, although as you say in point 2 this is less frequent than
restoring individual files.
Post by Alan Bowler
2) Restoring selected files is difficult to impossible from a tape
image of a full disk. In our experience, at Waterloo,
individual files needed to be restored because a user
fumble-fingered something far more often than because
of disk failure.
3) Hardware and software failures can and do cause damage
to file system structures that proceed to propagate, and
cause more damage (e.g. dual allocations).
If your image backup was done after the initial damage,
but before it was noticed, the restored file system
will again fall apart.
I never observed this on zOS, because "file structures" are much simpler.
Post by Alan Bowler
4) Image backups generally require that the pack(s),
and usually the system, be offline during the backup
so you don't grab an image of file system
structures that are inconsistent (partially updated).
Logical backups can be done on a running system.
IBM fixed this with "snapshot." The file is marked for backup and
subsequent writes go to alternate locations. You can then back up the
original file whenever you want, then free the hold and flush the
old versions of the tracks that were updated.
Post by Alan Bowler
5) Logical backups can be restored to other disk
configurations.
--
Pete
Charles Richmond
2014-10-24 20:36:59 UTC
Reply
Permalink
Post by Peter Flass
Post by Alan Bowler
Post by Peter Flass
IME it was often quicker to back up a whole pack with physical backup
rather than several datasets with logical backup.
True, if you're backing up everything, AND the pack is near full,
then physical (image) backups of the pack are faster than
a full logical (by file) backup.
1) it is even less time to do occasional full backups, with
more frequent incremental backups of what has changed.
This takes longer to restore a whole pack, as you have to process a number
of tapes, although as you say in point 2 this is less frequent than
restoring individual files.
Once at a PPoE, I accidentally deleted the source file of my FORTRAN77
program. Unfortunately, I had typed the whole thing in earlier in the day,
so there was *no* tape backup of the file. Fortunately, I had saved the
compiler listing file. So I just wrote a little program that would edit the
crap off the compiler listing and re-create my source file. That beat the
heck out of typing the whole source again!!!
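
A little program along those lines is easy to picture: strip the listing decoration and keep the source text. The listing layout below (a four-digit line number and spacing) is purely hypothetical, invented for illustration; real FORTRAN 77 compiler listings varied by vendor.

```python
# Hypothetical reconstruction of the trick: recover source lines from
# a compiler listing by stripping an assumed "NNNN  <source>" prefix.

import re

listing = """\
0001      PROGRAM DEMO
0002      PRINT *, 'HELLO'
0003      END
"""

def unlist(listing_text):
    source = []
    for line in listing_text.splitlines():
        m = re.match(r"\d{4}\s{2}(.*)", line)
        if m:                       # keep only lines that look like source
            source.append(m.group(1))
    return "\n".join(source)

assert unlist(listing).splitlines()[-1].strip() == "END"
```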
--
numerist at aquaporin4 dot com
Peter Flass
2014-10-25 12:05:50 UTC
Reply
Permalink
Post by Charles Richmond
Post by Peter Flass
Post by Alan Bowler
Post by Peter Flass
IME it was often quicker to back up a whole pack with physical backup
rather than several datasets with logical backup.
True, if you're backing up everything, AND the pack is near full,
then physical (image) backups of the pack are faster than
a full logical (by file) backup.
1) it is even less time to do occasional full backups, with
more frequent incremental backups of what has changed.
This takes longer to restore a whole pack, as you have to process a number
of tapes, although as you say in point 2 this is less frequent than
restoring individual files.
Once at a PPoE, I accidentally deleted the source file of my FORTRAN77
program. Unfortunately, I had typed the whole thing in earlier in the
day, so there was *no* tape backup of the file. Fortunately, I had saved
the compiler listing file. So I just wrote a little program that would
edit the crap off the compiler listing and re-create my source file.
That beat the heck out of typing the whole source again!!!
BTDT.
--
Pete
Anne & Lynn Wheeler
2014-08-04 17:48:19 UTC
Reply
Permalink
Post by Anne & Lynn Wheeler
the s/38 common filesystem pool scaled poorly ... just having to
save/restore all data as single integral whole, was barely tolerable
with a few disks ... but large mainframe system with 300 disks would
require days for the operation.
re:
http://www.garlic.com/~lynn/2014i.html#72 IBM Programmer Aptitude Test

one of the issues with IBM channels was big disk farms. total channel
run lengths were restricted to 200ft support 3330 800kbyte/sec transfer
(which includes hand-shake for every byte transferred).

to move to the 3880 disk controller, they went to "data streaming" which
supports multiple byte transfers per hand-shake ... this allowed extending
maximum channel length to 400ft and 3mbyte/sec transfer rates (however,
the slower processor in the 3880 significantly increased latency for
command & control processing operations compared to the 3830 controller).

some of the big datacenters would have processor in middle of room with
200ft channel radius out in every direction ... about 125k sq ft. area
for disk farm. some datacenters were constrained enough that they
started doing devices on multiple floors arrayed around the processor.

big datacenters also tended to have multiple processors in
"loosely-coupled" configuration ... 3330 disks could connect to two
different 3830 controllers with string switch and each 3830 controller
could have four channel interfaces (allowing disk to be access by eight
different channels/processors). center of disk farm would then be a
circle (rather than point) ... with overlapping radius ... limiting
max. disk farm physical area for connectivity to all processors.

3880 & datastreaming channel then extended the radius to 400ft (channel
run) or about 502k sq ft. area (twice the radius, four times the area)
containing disk farm. however, disk data density also went way up
... enormously increasing the total amount of data in some of these old
mainframe datacenters.
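
The floor-space figures above are just the area of a circle whose radius is the maximum channel run; doubling the radius quadruples the area:

```python
# Checking the disk-farm floor-space arithmetic: area bounded by a
# channel-length radius around a central processor is pi * r**2.

import math

area_200 = math.pi * 200 ** 2    # ~125k sq ft at 200 ft channel runs
area_400 = math.pi * 400 ** 2    # ~502k sq ft at 400 ft channel runs

assert round(area_200) == 125664
assert round(area_400) == 502655
assert abs(area_400 / area_200 - 4) < 1e-9   # double radius -> 4x area
```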

one of issues I've periodically mentioned doing channel extender and
fiber channel standard ... was moving the i/o program out to the remote
end to eliminate the end-to-end latency operations ... everything could
be continuously streamed, concurrently in both directions .... getting
aggregate, sustained data transfer much closer to media transfer rate.

recent posts mentioning 3830/3880 disk controllers:
http://www.garlic.com/~lynn/2014c.html#88 Optimization, CPU time, and related issues
http://www.garlic.com/~lynn/2014d.html#90 Enterprise Cobol 5.1
http://www.garlic.com/~lynn/2014i.html#68 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
http://www.garlic.com/~lynn/2014i.html#90 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#91 IBM Programmer Aptitude Test
http://www.garlic.com/~lynn/2014i.html#96 z/OS physical memory usage with multiple copies of same load module at different virtual addresses
http://www.garlic.com/~lynn/2014i.html#97 The SDS 92, its place in history?
http://www.garlic.com/~lynn/2014j.html#17 The SDS 92, its place in history?

posts mentioning FICON
http://www.garlic.com/~lynn/submisc.html#ficon

posts mentioning channel extender
http://www.garlic.com/~lynn/submisc.html#channel.extender
--
virtualization experience starting Jan1968, online at home since Mar1970
h***@bbs.cpcn.com
2014-07-25 01:48:41 UTC
Reply
Permalink
Post by Quadibloc
Well, the AS/400 and such did appear to include some of the features and philosophy associated with the Future System. So, while FS was too ambitious for its time, some of its basic ideas were sound enough to be worth keeping.
Having used the AS/400, I did not think much of the "single level store" concept. If the machine was lightly used it could work, but if the machine had heavy use performance was terrible because the single level store did not make efficient use of available resources. Kind of like the early days of virtual storage when the system would 'thrash' with too much paging to disk.

Ironically, _today_ in the Z world, we have _evolved_ to more of a single store world. This is because disk and core-memory have become so cheap that stuff that used to be put out to cheaper off line slow storage can now be affordably stored in high speed on line storage. Much of this is transparent to the application programmer, with the operating system or CICS automatically using fast resources when available.


As to "FS", the IBM System 360 history book has a lot of information on it.

On the surface, one could wonder why they didn't think more of "how" the whole thing was supposed to work with the technology available of the time; FS would require enormous overhead. But, in the early days of S/360, they weren't sure of how everything would work either, but eventually they got it all running (at a very high price in delays and sweat). So, maybe they figured FS would somehow work itself out, too.
h***@bbs.cpcn.com
2014-07-28 14:34:40 UTC
Reply
Permalink
Post by h***@bbs.cpcn.com
As to "FS", the IBM System 360 history book has a lot of information on it.
I checked the book last night and I strongly recommend it to anyone interested in FS. It describes the technical and marketing environment that inspired FS initial research and then large investment. It also describes in detail the layered approach of the FS architecture, something not quite well understood by the players and often changing; and the reasons for termination. There was too much detail to summarize here.

FS was killed because, in essence, they recognized (1) the advances in technology--cheap memory and powerful CPUs--were not advancing fast enough to make FS practical; (2) demand for S/370 products was stronger than expected, (3) 360-370 became the de-facto standard architecture for the industry, and (4) FS was extremely complicated and completion was seen too far away.
Anne & Lynn Wheeler
2014-07-28 15:50:22 UTC
Reply
Permalink
Post by h***@bbs.cpcn.com
FS was killed because, in essence, they recognized (1) the advances in
technology--cheap memory and powerful CPUs--were not advancing fast
enough to make FS practical; (2) demand for S/370 products was
stronger than expected, (3) 360-370 became the de-facto standard
architecture for the industry, and (4) FS was extremely complicated
and completion was seen too far away.
re:
http://www.garlic.com/~lynn/submain.html#futuresys

possibly spun as favorably as possible ... modulo:

1) some amount of it hadn't even been specified ... just some high-level
ideas and then "where's the beef" ... many areas were possibly years
away from finding whether they were even practical (as opposed to simply
lacking sufficiently advanced technology)

2) (ibm houston science center) simulation that showed a 370/195
application run on a FS machine made out of the same technology as
370/195, would have the throughput of a 370/145 (a 30-times slow-down).
It could only be marketed to a much less throughput-sensitive market ...
like the s/38 (which wasn't even a 370 market).

3) FS internal politics were killing off 370 product activity, then the
lack of 370 products gave the 370 clone vendors a market foothold
(killing off internal competition left the market wide-open to external
competition)

4) acs-end describes executives killing off acs/360 because it would
advance the computer state of the art too fast and IBM would lose
control of the market (also mentions features from acs/360 not showing
up until more than 20yrs later in es/9000)
http://people.cs.clemson.edu/~mark/acs_end.html

combination would imply that they wanted enormous advances in cheap
memory and powerful CPUs ... but not necessarily available to users.

some more here:
http://www.jfsowa.com/computer/memo125.htm

including some discussion of Brooks "Mythical Man Month" ... and 3081
(370) in the 80s being made out of warmed-over FS technology ... only
three times faster than the 168 ... but requiring so much hardware that
sixteen 168s could have been built for the cost of one 3081 (with five
times the aggregate throughput).

there was something similar earlier in the late 70s with 3033 and 4341
... 3033 also using warmed over FS technology. multiple 4341s had
aggregate higher throughput, lower cost, better price/performance,
smaller sq ft and smaller environmental footprint (in the
datacenters). they were also the leading edge of the distributed
computing tsunami ... large corporations buying hundreds at a time and
putting out in departmental areas.
--
virtualization experience starting Jan1968, online at home since Mar1970
Walter Bushell
2014-07-25 11:19:38 UTC
Reply
Permalink
Post by Quadibloc
Well, the AS/400 and such did appear to include some of the features and
philosophy associated with the Future System. So, while FS was too ambitious
for its time, some of its basic ideas were sound enough to be worth keeping.
"Parts of it were excellent."
--
Never attribute to stupidity that which can be explained by greed. Me.
h***@bbs.cpcn.com
2014-07-25 01:51:17 UTC
Reply
Permalink
The specs for FS were totally insane, for the technology available at the time (Motorola 10K ECL or any equivalent). So, should FS have been canceled as it could NEVER reach the goal, or kept alive, as it would have been a very powerful machine? Was it an all-out attempt to make a supercomputer which would sell maybe less than a dozen units? Or, was it the basis of the next generation of IBM mainframes?
FS was not a super-computer, but the basis for the next generation of IBM mainframes. It was to revolutionize the I.T. world as System/360 did.
Rod Speed
2014-07-19 20:04:27 UTC
Reply
Permalink
That is a 20 year old post you replied to.
Post by f***@dfwair.net
I took this test in 1962 when I was a senior in high school. I did well
enough to get two summer job offers - one from IBM and one from the Ford
Scientific Research Laboratory. I took the job with Ford because it
involved programming for real applications (the IBM job involved being an
assistant to a man who repaired accounting machines). The test evaluated
my ability to think in a logical manner and solve puzzles. While certainly
not comprehensive by today's standards, it did work fairly well from my
perspective. I ended up with a 40+ year career in software development.
Reading an old (1971) journal article, I saw a reference to the IBM Programmer
McNamara, W. J., & Hughes, J. L.
Manual for the revised programmer aptitude test.
White Plains, New York: IBM, 1969.
completion of number sequences, geometric paired comparisons, and word
problems similar to those in junior high school mathematics.
1. Has anyone out there actually taken this test?
2. How would I go about getting a copy?
Thanks in advance.
--
Paul Palmer
Kidder Hall 368
Oregon State University, Corvallis, Oregon 97331-4605
l***@gmail.com
2019-05-06 03:49:41 UTC
Reply
Permalink
I took what was likely a similar test, as part of a COBOL class I was taking at a Junior College, and was told that I did fairly well.
I took the IBM Aptitude Test twice, in 1969, for 2 different companies, and got virtually the same score the 2nd time as I did the first time.
The description is correct, 75 questions, 3 parts, math, word problems, and picture series.
I believe my score was 69, which, I'm told, was very good (but I've nothing to compare it to).
As far as I know, one has to go to an IBM site to have the test administered, and it's not available anywhere else.
Peter Flass
2019-05-06 18:44:01 UTC
Reply
Permalink
Post by l***@gmail.com
I took what was likely a similar test, as part of a COBOL class I was
taking at a Junior College, and was told that I did fairly well.
I took the IBM Aptitude Test twice, in 1969, for 2 different companies,
and got virtually the same score the 2nd time as I did the first time.
The description is correct, 75 questions, 3 parts, math, word problems, and picture series.
I believe my score was 69, which, I'm told, was very good (but I've
nothing to compare it to).
As far as I know, one has to go to an IBM site to have the test
administered, and it's not available anywhere else.
Do they still offer it? Programmer aptitude tests were all the rage for a
while in the late 60s and early 70s, but then it was decided they weren’t
that predictive of success in the real world. I can’t recall now if I ever
took one.
--
Pete
JimP
2019-05-06 20:03:47 UTC
Reply
Permalink
Post by Peter Flass
Post by l***@gmail.com
I took what was likely a similar test, as part of a COBOL class I was
taking at a Junior College, and was told that I did fairly well.
I took the IBM Aptitude Test twice, in 1969, for 2 different companies,
and got virtually the same score the 2nd time as I did the first time.
The description is correct, 75 questions, 3 parts, math, word problems, and picture series.
I believe my score was 69, which, I'm told, was very good (but I've
nothing to compare it to).
As far as I know, one has to go to an IBM site to have the test
administered, and it's not available anywhere else.
Do they still offer it? Programmer aptitude tests were all the rage for a
while in the late 60s and early 70s, but then it was decided they weren’t
that predictive of success in the real world. I can’t recall now if I ever
took one.
At my last job before retirement we had to take the two-part CompTIA A+
computer tech exam to prove we could fix computers. My boss asked
me what I thought of it. I didn't care for it.
--
Jim
h***@bbs.cpcn.com
2019-05-07 22:03:33 UTC
Reply
Permalink
Post by Peter Flass
Do they still offer it? Programmer aptitude tests were all the rage for a
while in the late 60s and early 70s, but then it was decided they weren’t
that predictive of success in the real world. I can’t recall now if I ever
took one.
IMHO they were b/s. My employer used them for recruitment,
and performance on the job was NOT correlated to test results.
One frustration with some managements was that programmers
were not 'one size fits all'. Management wanted programmers
to act like accounting clerks and be totally uniform and
compliant in personality and work style. But most programmers
weren't like that. While some consistency in coding style
was important to prevent unreadable code, most of the time
prissy management was counterproductive.
Huge
2019-05-08 20:18:12 UTC
Reply
Permalink
Post by h***@bbs.cpcn.com
Post by Peter Flass
Do they still offer it? Programmer aptitude tests were all the rage for a
while in the late 60s and early 70s, but then it was decided they weren’t
that predictive of success in the real world. I can’t recall now if I ever
took one.
IMHO they were b/s.
Me too. CDC gave me one and reckoned I was operator material. 41 years later I retired as an exceedingly well paid IT Security consultant.
--
I don't have an attitude problem. If you have a problem with my
attitude, that's your problem.
Dan Espen
2019-05-09 01:30:54 UTC
Reply
Permalink
Post by Huge
Post by h***@bbs.cpcn.com
Post by Peter Flass
Do they still offer it? Programmer aptitude tests were all the rage
for a while in the late 60s and early 70s, but then it was decided
they weren’t that predictive of success in the real world. I can’t
recall now if I ever took one.
IMHO they were b/s.
Me too. CDC gave me one and reckoned I was operator material. 41 years
later I retired as an exceedingly well paid IT Security consultant.
I was working as an office clerk.
I went to a programming tech school at nights, and did well.
Then told my employer. They were looking for programmers so they
gave me that test.
They told me I failed, but wanted me to take it again.
So I took it again, I don't know how I did but they
transferred me into programming.
So, I was never clear on what happened with that test,
but if I failed once, the test was bad.
I usually did well at those kinds of tests.
--
Dan Espen
J. Clarke
2019-05-09 01:52:14 UTC
Reply
Permalink
Post by Dan Espen
Post by Huge
Post by h***@bbs.cpcn.com
Post by Peter Flass
Do they still offer it? Programmer aptitude tests were all the rage
for a while in the late 60s and early 70s, but then it was decided
they weren’t that predictive of success in the real world. I can’t
recall now if I ever took one.
IMHO they were b/s.
Me too. CDC gave me one and reckoned I was operator material. 41 years
later I retired as an exceedingly well paid IT Security consultant.
I was working as an office clerk.
I went to a programming tech school at nights, and did well.
Then told my employer. They were looking for programmers so they
gave me that test.
They told me I failed, but wanted me to take it again.
So I took it again, I don't know how I did but they
transferred me into programming.
So, I was never clear on what happened with that test,
but if I failed once, the test was bad.
I usually did well at those kinds of tests.
My current employer was using a similar test 4 years ago. They've
since outsourced their HR and the HR contractor won't let them use it
unless they can prove that it correlates to job performance. Of
course the new HR contractor won't let us ask Fizz-Buzz of an
applicant for a programming position . . .
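[For readers who haven't met it: Fizz-Buzz, mentioned above, is the classic
screening exercise used to weed out applicants who can't write a trivial
loop. The thread doesn't specify a language or exact wording, so this is
just one common phrasing, sketched in Python:]

```python
def fizzbuzz(n):
    """Return the Fizz-Buzz sequence for 1..n as a list of strings.

    Multiples of 3 become "Fizz", multiples of 5 become "Buzz",
    multiples of both become "FizzBuzz"; everything else is the number.
    """
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(" ".join(fizzbuzz(15)))
# → 1 2 Fizz 4 Buzz Fizz 7 8 Fizz Buzz 11 Fizz 13 14 FizzBuzz
```

The point of the test is not cleverness; it is that a surprising fraction
of applicants for programming jobs cannot produce even this.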
Dan Espen
2019-05-09 13:19:58 UTC
Reply
Permalink
Post by J. Clarke
Post by Dan Espen
Post by Huge
Post by h***@bbs.cpcn.com
Post by Peter Flass
Do they still offer it? Programmer aptitude tests were all the rage
for a while in the late 60s and early 70s, but then it was decided
they weren’t that predictive of success in the real world. I can’t
recall now if I ever took one.
IMHO they were b/s.
Me too. CDC gave me one and reckoned I was operator material. 41 years
later I retired as an exceedingly well paid IT Security consultant.
I was working as an office clerk.
I went to a programming tech school at nights, and did well.
Then told my employer. They were looking for programmers so they
gave me that test.
They told me I failed, but wanted me to take it again.
So I took it again, I don't know how I did but they
transferred me into programming.
So, I was never clear on what happened with that test,
but if I failed once, the test was bad.
I usually did well at those kinds of tests.
My current employer was using a similar test 4 years ago. They've
since outsourced their HR and the HR contractor won't let them use it
unless they can prove that it correlates to job performance. Of
course the new HR contractor won't let us ask Fizz-Buzz of an
applicant for a programming position . . .
That's crazy.
When I had to do interviews, I'd show the applicant some code, ask them
to explain it. That seemed to sort out the technical ability pretty
quickly. Personality quirks, not so easy to detect.
--
Dan Espen
h***@bbs.cpcn.com
2019-05-09 18:37:08 UTC
Reply
Permalink
Post by Dan Espen
So, I was never clear on what happened with that test,
but if I failed once, the test was bad.
I usually did well at those kinds of tests.
There are different flavors of programming, e.g. application
vs. systems, which require different skillsets. An aptitude
test does not necessarily measure them properly. Further,
IMHO, the aptitude tests are some psychologist's idea of
what makes a good programmer, rather than reality.
I don't know about now, but in the past many private firms
would have a psychologist interview management candidates
and maybe take a management aptitude test. That's even more
ridiculous. The skills of a manager vary greatly depending
on what is being managed, and the H/R types don't get it.
That is, the manager of, say, a warehouse is different from
the manager of programmers.