Anne & Lynn Wheeler
2021-11-05 19:41:59 UTC
IBM CKD DASD and multi-track search
https://en.wikipedia.org/wiki/Count_key_data
I got brought in by the branch office into the datacenter for a large national
grocery chain ... multiple 168s in loosely-coupled configuration for different
regions. They were having enormous VS2 performance problems and all the
standard IBM experts had already been brought through. They had a large
classroom with large piles of performance activity numbers covering the
student tables. I started leafing through and after 30 minutes noticed that
during peak slowdown the aggregate activity across all systems for a
specific shared 3330 drive peaked at approx. seven/second (one or two
per second per system; the usual rule-of-thumb for a 3330 would be
more like 30-40, not 7) .... it was the only thing that seemed to
correlate with the peak slowdown. After a little more investigation they
said the shared 3330 drive had the PDS library for all store controller
applications ... and my undergraduate days started to kick in.
I had taken a 2hr intro to computers/fortran and within a year, the
univ. hired me fulltime to be responsible for IBM mainframe systems
... starting with OS/360 Release 9.5 MFT. The univ. had gotten a 360/67
supposedly for tss/360, to replace a 709/1401 ... the 709 ran tape->tape
with a 1401 front end handling tape<->unit record (student fortran jobs
ran in less than a second). TSS/360 never quite came to production
fruition ... so the 360/67 was running as a 360/65 with OS/360 ... and
student fortran ran over a minute. Installing HASP cut the time for
student fortran jobs in half (to just over 30 seconds). For OS/360
Release 11 MFT, I started doing highly customized SYSGENs ... tearing
apart the STAGE2 SYSGEN deck and re-arranging statements to optimize
placement of files and PDS members for arm seek and PDS directory
multi-track search ... cutting another 2/3rds, to 12.9 seconds for
student fortran ... but still never beat the 709 tape->tape until I
installed UofWaterloo WATFOR.
Turns out the shared store controller PDS library 3330 (for all stores
in the country) had a very large number of members and a three-cylinder
PDS directory. A PDS application member load required on avg. a >1cyl
multi-track search ... i.e. two multi-track search I/Os plus the
seek/read to load the member ... or three I/Os. The peak 7/sec meant that
the whole system was limited to just barely more than two store
controller application loads per second across all the stores in the
country ... i.e. approx. .48secs per application load. A full-cylinder
PDS directory multi-track search takes 19 rotations or .317secs, and a
member load is around .03secs ... so there was a 2nd PDS directory
multi-track search that averaged 0.133secs, or approx. 7-8 track
rotations.
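
As a back-of-the-envelope check, here's the same arithmetic as a tiny Python
sketch (assuming the 3330's 3600 RPM spindle and 19 tracks per cylinder; the
7-8 rotation second search and .03sec member load are the figures from above):

  # IBM 3330: 3600 RPM spindle, 19 tracks per cylinder
  rotation = 60.0 / 3600            # one revolution: ~16.7 ms
  tracks_per_cyl = 19

  full_cyl_search = tracks_per_cyl * rotation   # 1st directory search: 19 revs ~= .317 sec
  second_search   = 8 * rotation                # 2nd (partial) search: ~7-8 revs ~= .133 sec
  member_load     = 0.03                        # seek/read of the member itself

  per_load      = full_cyl_search + second_search + member_load   # ~.48 sec per load
  loads_per_sec = 1.0 / per_load                # ~2 loads/sec for the whole chain
  io_per_sec    = 3 * loads_per_sec             # 3 I/Os per load -> ~6-7/sec, the observed peak

  print(f"per load: {per_load:.2f}s  loads/sec: {loads_per_sec:.1f}  I/Os/sec: {io_per_sec:.1f}")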
Once the PDS directory multi-track search was diagnosed, they partitioned
the store controller library into multiple files on different drives and
then created a private (non-shared) set for each region's system. trivia:
note that a full-cyl. (19 track) multi-track search, taking 1/3rd sec, not
only locks up the drive, but also the controller (and all drives on that
controller, for all systems using that controller) and the channel.
I've periodically pontificated that IBM CKD DASD and features like
multi-track search were a 60s trade-off for limited real storage and ample
I/O capacity. However, that trade-off started to invert by the
mid-70s. Also, in 1983 I wrote that over a period of 15yrs, DASD
relative system throughput had declined by an order of magnitude (ten
times, i.e. DASD got 3-5 times faster, but systems got 40-50 times
faster). Some IBM disk division executive took exception and assigned
the division performance group to refute my claims. After a couple of
weeks, they came back and basically said that I had slightly understated
the problem. Their analysis was respun into a presentation on how to
organize data on DASD to improve system throughput, presented in session
B874 at SHARE 63, 16Aug1984.
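
The relative-throughput arithmetic, as another tiny sketch (the 3-5x and
40-50x factors are the ones quoted above; the midpoints are purely
illustrative):

  dasd_speedup   = 4      # DASD got ~3-5x faster over ~15 yrs (illustrative midpoint)
  system_speedup = 45     # systems got ~40-50x faster (illustrative midpoint)

  relative = dasd_speedup / system_speedup      # ~0.09 of its former relative throughput
  print(f"relative DASD throughput: {relative:.2f}x, "
        f"i.e. roughly a {system_speedup / dasd_speedup:.0f}x decline")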
--
virtualization experience starting Jan1968, online at home since Mar1970