Post by Charles Richmond
At the college I attended, there was access to a PDP-10 from MCRC
(Medical Computing Resource Center, part of the Southwestern
Medical School in Dallas). The max recommended number of users was
64, but the site allowed 80 users at a time. It was s-o s-l-o-w
that working with it was impossible. (I did *not* try 3 am...)
Later our college got a DECsystem 20 and things were much better.
Few folks at the college knew how to use it, so it was *not*
loaded down.
re:
http://www.garlic.com/~lynn/2009l.html#8 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer
http://www.garlic.com/~lynn/2009l.html#9 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer
http://www.garlic.com/~lynn/2009l.html#10 August 7, 1944: today is the 65th Anniversary of the Birth of the Computer
I had done a huge amount of pathlength optimization, general I/O thruput
optimization, page replacement algorithm optimization and page I/O
optimization for cp67.
I also did dynamic adaptive resource management ... frequently referred
to as fair share scheduling ... because the default policy was resource
fair share.
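As a rough illustration only (illustrative python, not cp67 code; the
names and the decay constant are made up), the resource fair share idea
amounts to ordering dispatch by each user's recent consumption relative
to their entitlement, with old consumption aged out so the policy adapts
to changing load:

# rough sketch of resource fair share dispatch ordering -- illustrative,
# not cp67 code; names and decay constant are invented for the example

class User:
    def __init__(self, name, share=1.0):
        self.name = name
        self.share = share        # relative entitlement, 1.0 = default fair share
        self.consumed = 0.0       # recent resource use (e.g. cpu seconds)

    def priority(self):
        # lower is better: consumption relative to entitlement
        return self.consumed / self.share

def pick_next(runnable):
    # dispatch whoever is furthest below their fair share
    return min(runnable, key=lambda u: u.priority())

def charge(user, cpu_seconds, decay=0.9):
    # age out old consumption so the policy adapts to changing load
    user.consumed = user.consumed * decay + cpu_seconds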
Grenoble science center had a 360/67 similar to the cambridge science
center machine ... but with 1mbyte of real storage instead of only
768kbytes (after cp67 fixed storage requirements, the Grenoble machine
netted 50% more memory for paging than the cambridge machine).
At one point, Grenoble took cp67 and modified it to implement a
relatively straightforward "working set" dispatcher ... pretty much as
described in the computer literature of the time ... and published an
ACM article with a fairly detailed workload, performance and thruput
study.
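For reference, the classic working set test from the literature boils
down to the following (illustrative python, not the Grenoble cp67
modification; names are made up): a task's working set is the set of
pages it touched within a trailing window, and it is admitted to the
dispatch set only if that set fits in the available page frames.

# rough sketch of the classic working-set admission test --
# illustrative, not the actual Grenoble cp67 code

def working_set(ref_trace, now, tau):
    # ref_trace is a list of (time, page) reference events;
    # the working set is the pages touched in the window (now - tau, now]
    return {page for (t, page) in ref_trace if now - tau < t <= now}

def can_dispatch(ref_trace, now, tau, free_frames):
    # admit a task only if its working set fits in the
    # page frames currently available
    return len(working_set(ref_trace, now, tau)) <= free_frames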
It turned out that the (modified) Grenoble cp67 system with 35 users
(and 50% more real storage for paging), running nearly the same
workload, got about the same thruput as the Cambridge cp67 system did
with 80 users (and my dynamic adaptive resource management).
On the Cambridge system, trivial interactive response degraded much more
gracefully ... even under extremely heavy workloads (including during
extended periods of 100% cpu utilization, high i/o activity and high
paging i/o activity).
misc. past posts mentioning cambridge science center:
http://www.garlic.com/~lynn/subtopic.html#545tech
misc. past posts mentioning grenoble science center:
http://www.garlic.com/~lynn/93.html#7 HELP: Algorithm for Working Sets (Virtual Memory)
http://www.garlic.com/~lynn/94.html#1 Multitasking question
http://www.garlic.com/~lynn/99.html#18 Old Computers
http://www.garlic.com/~lynn/2001h.html#26 TECO Critique
http://www.garlic.com/~lynn/2001l.html#6 mainframe question
http://www.garlic.com/~lynn/2002c.html#49 Swapper was Re: History of Login Names
http://www.garlic.com/~lynn/2002o.html#30 Computer History Exhibition, Grenoble France
http://www.garlic.com/~lynn/2002q.html#24 Vector display systems
http://www.garlic.com/~lynn/2003f.html#50 Alpha performance, why?
http://www.garlic.com/~lynn/2004.html#25 40th anniversary of IBM System/360 on 7 Apr 2004
http://www.garlic.com/~lynn/2004c.html#59 real multi-tasking, multi-programming
http://www.garlic.com/~lynn/2004g.html#13 Infiniband - practicalities for small clusters
http://www.garlic.com/~lynn/2004q.html#73 Athlon cache question
http://www.garlic.com/~lynn/2005d.html#37 Thou shalt have no other gods before the ANSI C standard
http://www.garlic.com/~lynn/2005d.html#48 Secure design
http://www.garlic.com/~lynn/2005f.html#47 Moving assembler programs above the line
http://www.garlic.com/~lynn/2005h.html#10 Exceptions at basic block boundaries
http://www.garlic.com/~lynn/2005h.html#15 Exceptions at basic block boundaries
http://www.garlic.com/~lynn/2005n.html#23 Code density and performance?
http://www.garlic.com/~lynn/2006e.html#7 About TLB in lower-level caches
http://www.garlic.com/~lynn/2006e.html#37 The Pankian Metaphor
http://www.garlic.com/~lynn/2006f.html#0 using 3390 mod-9s
http://www.garlic.com/~lynn/2006i.html#31 virtual memory
http://www.garlic.com/~lynn/2006i.html#36 virtual memory
http://www.garlic.com/~lynn/2006i.html#37 virtual memory
http://www.garlic.com/~lynn/2006i.html#42 virtual memory
http://www.garlic.com/~lynn/2006j.html#1 virtual memory
http://www.garlic.com/~lynn/2006j.html#17 virtual memory
http://www.garlic.com/~lynn/2006j.html#25 virtual memory
http://www.garlic.com/~lynn/2006l.html#14 virtual memory
http://www.garlic.com/~lynn/2006o.html#11 Article on Painted Post, NY
http://www.garlic.com/~lynn/2006q.html#19 virtual memory
http://www.garlic.com/~lynn/2006q.html#21 virtual memory
http://www.garlic.com/~lynn/2006r.html#34 REAL memory column in SDSF
http://www.garlic.com/~lynn/2006w.html#46 The Future of CPUs: What's After Multi-Core?
http://www.garlic.com/~lynn/2007i.html#15 when was MMU virtualization first considered practical?
http://www.garlic.com/~lynn/2007s.html#5 Poster of computer hardware events?
http://www.garlic.com/~lynn/2007u.html#79 IBM Floating-point myths
http://www.garlic.com/~lynn/2007v.html#32 MTS memories
http://www.garlic.com/~lynn/2008c.html#65 No Glory for the PDP-15
http://www.garlic.com/~lynn/2008h.html#70 New test attempt
http://www.garlic.com/~lynn/2008h.html#79 Microsoft versus Digital Equipment Corporation
http://www.garlic.com/~lynn/2008r.html#21 What if the computers went back to the '70s too?
--
40+yrs virtualization experience (since Jan68), online at home since Mar1970