Post by John Levine
CPU speed, sure, but the point of a mainframe is that it has high performance
peripherals. A /145 could have up to four channels and could attach several
dozen disk drives.
These days one SSD holds more data than two dozen 2314 disks, but I wouldn't
think a Pi has particularly high I/O bandwidth.
Raspberry Pi 4 specs and benchmarks (2 yrs ago)
https://magpi.raspberrypi.org/articles/raspberry-pi-4-specs-benchmarks
SoC: Broadcom BCM2711B0 quad-core A72 (ARMv8-A) 64-bit @ 1.5GHz
GPU: Broadcom VideoCore VI
Networking: 2.4GHz and 5GHz 802.11b/g/n/ac wireless LAN
RAM: 1GB, 2GB, or 4GB LPDDR4 SDRAM
Bluetooth: Bluetooth 5.0, Bluetooth Low Energy (BLE)
GPIO: 40-pin GPIO header, populated
Storage: microSD
Ports: 2x micro-HDMI 2.0, 3.5mm analogue audio-video jack, 2x USB 2.0,
2x USB 3.0, Gigabit Ethernet, Camera Serial Interface (CSI), Display Serial Interface (DSI)
Dimensions: 88mm x 58mm x 19.5mm, 46g
linpack: 925 MIPS, 748 MIPS, 2037 MIPS
memory bandwidth (1MB blocks, r&w): 4129, 4427 mbytes/sec
USB storage throughput (r&w): 353, 323 mbytes/sec
more details
https://en.wikipedia.org/wiki/Raspberry_Pi
best picks Pi microSD cards (32gbytes)
https://www.tomshardware.com/best-picks/raspberry-pi-microsd-cards
===
by comparison, a 145 would be .3MIPS with 512kbytes memory; 2314
capacity was 29mbytes ... so 34 2314s/gbyte, or 340 2314s for
10gbytes
https://www.ibm.com/ibm/history/exhibits/storage/storage_2314.html
2314 data rate was 312kbytes/sec ... ignoring channel program overhead,
disk access, etc, and assuming all four 145 channels continuously doing
disk i/o transfer at a sustained 312kbytes/sec ... that is a theoretical
1.2mbytes/sec aggregate.
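
Back-of-envelope (a minimal Python sketch of the above arithmetic; the
only inputs are the 2314 and Pi4 figures already quoted):

drive_mb = 29                       # 2314 module capacity, mbytes
print(round(1000 / drive_mb, 1))    # ~34.5 2314s per gbyte
channels, xfer_kb = 4, 312          # /145 channels, 2314 kbytes/sec
agg_mb = channels * xfer_kb / 1000  # 1.248 mbytes/sec best-case aggregate
pi_usb_mb = 353                     # Pi4 USB3 storage read, from above
print(round(pi_usb_mb / agg_mb))    # ~283x the /145 channel aggregate
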
trivia: after transferring to San Jose Research (bldg28), I got roped
into playing disk engineer part time (across the street in
bldg14&15). The 3830 controller for 3330 & 3350 disk drives was replaced
with the 3880 controller for 3380 disk drives. While the 3880 had a
special hardware data path for handling the 3380's 3mbyte/sec transfer,
it had a microprocessor that was significantly slower than the 3830's
for everything else ... which drastically drove up channel busy
overhead, especially the channel program chatter latency between
processor and controller.
The 3090 folks had configured the number of channels assuming the 3880
would be similar to the 3830 but handling 3mbyte/sec data transfer ...
when they found out how bad the 3880 channel busy really was, they
realized they would have to drastically increase the number of channels.
The channel increase required an extra (very expensive) TCM (there were
jokes that the 3090 office was going to charge the 3880 office for the
increase in 3090 manufacturing cost). Eventually marketing respun the
big increase in the number of channels (to handle the half-duplex
chatter channel busy overhead) as how great all the 3090 channels were.
Other trivia: in 1980, IBM STL (lab) was bursting at the seams and they
were moving 300 people from the IMS DBMS development group to an
offsite bldg, with dataprocessing back to the STL datacenter. The group
had tried "remote" 3270 terminal support and found the human factors
totally unacceptable. I get conned into doing channel-extender support
so they can put local channel-connected 3270 controllers at the offsite
bldg (with no perceived difference in human factors between offsite and
in STL).
The hardware vendor tries to get IBM to release my support, but there
were some people in POK playing with some serial stuff who get it
vetoed (they were worried that if it was in the market, it would be
harder to justify releasing their stuff). Then in 1988, I'm asked to
help LLNL standardize some serial stuff they are playing with ... which
quickly becomes the fibre channel standard (including some stuff I had
done in 1980), initially 1gbit/sec (100mbytes/sec) full-duplex (2gbit,
aka 200mbytes/sec, aggregate).
In 1990, the POK people get their stuff released with ES/9000 as ESCON
(when it is already obsolete, around 17mbytes/sec aggregate). Later some
of the POK people start playing with the fibre channel standard and
define a heavy-weight protocol that drastically cuts the native
throughput, which is finally released as FICON.
The latest published benchmark I can find is "peak I/O" for z196, which
used 104 FICON (running over 104 fibre channel) to get 2M IOPS. About
the same time, a fibre channel was announced for E5-2600 blades claiming
over a million IOPS; two such fibre channel would have higher (native)
throughput than all 104 FICON running over 104 fibre channel.
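
Same numbers per link (a quick Python sketch; the only assumption is
rounding the E5-2600 claim down to an even million IOPS):

z196_iops, ficon_links = 2_000_000, 104
per_ficon = z196_iops / ficon_links
print(round(per_ficon))             # ~19,231 IOPS per FICON link
fcs_iops = 1_000_000                # "over a million" per native FCS
print(round(fcs_iops / per_ficon))  # one native FCS ~= 52 FICON links
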
--
virtualization experience starting Jan1968, online at home since Mar1970