Post by h***@bbs.cpcn.com
I don't know the internals of the "Big Blue" machines vs. Z architecture.
My _guess_ is that Z is more intended to serve many users--thousands of
CICS terminals--while Big Blue is intended to focus on heavy number
crunching. I also guess that Big Blue can do math problems faster than Z.
z900,  16 processors,  2.5 BIPS (156 MIPS/proc), Dec 2000
z990,  32 processors,    9 BIPS (281 MIPS/proc), 2003
z9,    54 processors,   18 BIPS (333 MIPS/proc), Jul 2005
z10,   64 processors,   30 BIPS (469 MIPS/proc), Feb 2008
z196,  80 processors,   50 BIPS (625 MIPS/proc), Jul 2010
EC12, 101 processors,   75 BIPS (743 MIPS/proc), Aug 2012
Published refs put z13 at 30% more throughput than EC12 (or about
100 BIPS) with 40% more processors (141) ... or about 710 MIPS/proc.
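
The per-processor numbers are just aggregate BIPS divided by processor
count; a quick Python sketch (using the approximate figures quoted
above, not official IBM ratings) reproduces them, including the z13
estimate:

    # per-processor MIPS = aggregate BIPS * 1000 / processor count
    # figures are the approximate published numbers quoted above
    systems = [
        ("z900", 16, 2.5), ("z990", 32, 9), ("z9", 54, 18),
        ("z10", 64, 30), ("z196", 80, 50), ("EC12", 101, 75),
    ]
    for name, procs, bips in systems:
        print(f"{name:5s} {bips * 1000 / procs:6.0f} MIPS/proc")

    # z13 estimate: 30% more throughput, 40% more processors than EC12
    z13_bips = 100               # ~1.3 * 75 BIPS, rounded
    z13_procs = round(101 * 1.4) # ~141 processors
    print(f"z13   {z13_bips * 1000 / z13_procs:6.0f} MIPS/proc")  # ~710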
Part of the issue is that memory latency, when measured in count of
processor cycles, is comparable to 60s disk access when measured in
count of 60s processor cycles.
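
A back-of-envelope sketch in Python; the clock rate, memory latency,
60s instruction rate, and 60s disk access time below are purely
illustrative assumptions (order-of-magnitude only), not measured
figures for any particular machine:

    # modern processor: memory reference measured in processor cycles
    clock_hz = 5.0e9        # assume ~5GHz processor clock
    mem_latency = 100e-9    # assume ~100ns memory latency
    print(mem_latency * clock_hz)    # ~500 cycles per memory reference

    # 60s processor: disk access measured in instruction times
    ips_60s = 1.0e6         # assume a ~1 MIPS 60s processor
    disk_access = 0.5e-3    # assume ~0.5ms fixed-head disk/drum access
    print(disk_access * ips_60s)     # ~500 instruction times per access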
Earlier press said that half the per-processor improvement from z10 to
z196 came from introducing features like out-of-order execution, branch
prediction, etc. that have been in other chips for decades ... aka
masking/compensating for the increasing mismatch between memory latency
and processor speed. The per-processor improvement from z196 to EC12
involves more such features.
An e5-2600v1 blade, roughly concurrent with z196, benchmarks at
400-500+ BIPS (depending on model). An e5-2600v3 blade is rated at 2.5
times an e5-2600v1 blade, and an e5-2600v4 blade is rated at 3.5 times
an e5-2600v1 blade ... or over 1.5 TIPS (a single e5-2600v4 blade with
the processing power of fifteen max-configured latest z13 mainframes?)
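
The arithmetic behind that parenthetical, sketched in Python (450 BIPS
is just the midpoint of the 400-500+ range; 100 BIPS is the z13
approximation from above):

    v1 = 450          # e5-2600v1 blade, midpoint of 400-500+ BIPS
    v3 = v1 * 2.5     # ~1125 BIPS
    v4 = v1 * 3.5     # ~1575 BIPS ... over 1.5 TIPS
    z13 = 100         # max-configured z13, approx. BIPS from above
    print(v4 / z13)   # ~15.75 -> roughly fifteen z13s per v4 blade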
4341 was the leading edge of the distributed computing tsunami (large
corporations ordering hundreds at a time for placing out in
departmental areas); in the datacenter as well, 4341 clusters had much
more processing power and I/O capacity, much lower price, and much less
physical and environmental footprint. At one point the head of POK felt
it was such a threat to 3033 that he convinced corporate to cut the
allocation of a critical 4341 manufacturing component in half. Before
4341s first shipped, I was con'ed into doing a benchmark on a 4341
engineering machine in the disk product test lab (bldg. 15) for LLNL,
which was looking at getting 70 for a compute farm (leading edge of the
new supercomputing and cloud computing paradigm). some old email
http://www.garlic.com/~lynn/lhwemail.html#4341
past posts getting to play disk engineer in bldgs 14 & 15
http://www.garlic.com/~lynn/subtopic.html#disk
In 1980, IBM STL was growing fast and had to move 300 people from the
IMS group to an offsite building (with computer access back into the
STL datacenter). They looked at "remote" 3270 support ... but found the
human factors totally unacceptable. I got sucked into doing
channel-extender support for local channel-attached 3270 controllers at
the remote building. An optimization, downloading channel programs to
the remote end for execution, helped eliminate the enormous latency
from channel protocol chatter (round-trip arithmetic sketched below)
... and users couldn't tell the difference between "local" 3270 channel
operation at the remote end ... and "local" 3270 channel operation in
STL. some past posts
http://www.garlic.com/~lynn/submisc.html#channel.extender
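
The win is simple round-trip arithmetic; a Python sketch with
hypothetical numbers (the handshake count and link latencies are
illustrative assumptions, not measured 3270 channel figures):

    handshakes = 30      # assume ~30 channel-protocol round trips per I/O
    local_rtt = 5e-6     # assume ~5us round trip on a local channel
    remote_rtt = 500e-6  # assume ~500us round trip over the extended link

    # naive extension: every handshake crosses the extended link
    naive = handshakes * remote_rtt                   # 15ms per I/O
    # downloaded: ship the channel program once, chatter runs locally
    downloaded = remote_rtt + handshakes * local_rtt  # ~0.65ms per I/O
    print(naive, downloaded)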
The hardware vendor tried to get IBM to release my support for the
channel extender ... but there was a group in POK that objected ...
they were afraid it would make it harder to justify releasing some
serial stuff they were playing with.
In 1988, I'm asked to help LLNL get some serial stuff they had,
standardized; it quickly morphs into the fibre-channel standard
... including lots of stuff to minimize round-trip protocol-chatter
latency.
Then the POK engineers (from 1980) finally get their stuff released as
ESCON with ES/9000 when it is already obsolete. some past posts
http://www.garlic.com/~lynn/submisc.html#escon
Later some POK engineers get involved in the fibre-channel standard and
define a heavy-weight protocol that drastically reduces native
throughput; it is finally released as FICON. IBM publishes a "peak I/O"
benchmark for z196 that uses 104 FICON (over 104 fibre-channel) getting
2M IOPS. About the same time, a fibre-channel is announced for the
e5-2600v1 blade claiming over 1M IOPS (two such fibre-channels have
higher aggregate throughput than 104 FICON running over 104
fibre-channel; per-link arithmetic sketched below). some past posts
http://www.garlic.com/~lynn/submisc.html#ficon
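
Per link, the comparison works out as follows (Python, using the 2M and
1M IOPS figures quoted above):

    ficon_links = 104
    ficon_total = 2.0e6               # z196 "peak I/O" benchmark IOPS
    print(ficon_total / ficon_links)  # ~19,230 IOPS per FICON link
    native_fcs = 1.0e6                # claimed per fibre-channel, e5-2600v1
    print(2 * native_fcs)             # two native links match all 104 FICON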
old posts referencing jan1992 meeting in Ellison's conference
room on (commercial/DBMS) cluster scaleup
http://www.garlic.com/~lynn/95.html#13
also was working with national labs (including LLNL) on cluster scaleup
for numeric intensive and filesystems ... some old email
http://www.garlic.com/~lynn/lhwemail.html#medusa
Within a month of the Ellison meeting, cluster scaleup is transferred,
we are told we can't work on anything with more than four processors,
and it is announced as a supercomputer; 17Feb1992 article announcing it
for scientific and technical "only"
http://www.garlic.com/~lynn/2001n.html#6000clusters1
11May1992 article that national lab interest in cluster scaleup caught
the company by "surprise" (modulo going back to the 1979 4341 compute
farm/cluster)
http://www.garlic.com/~lynn/2001n.html#6000clusters2
recent posts mentioning e5-2600:
http://www.garlic.com/~lynn/2015.html#35 [CM] IBM releases Z13 Mainframe - looks like Batman
http://www.garlic.com/~lynn/2015.html#36 [CM] IBM releases Z13 Mainframe - looks like Batman
http://www.garlic.com/~lynn/2015.html#39 [CM] IBM releases Z13 Mainframe - looks like Batman
http://www.garlic.com/~lynn/2015.html#46 Why on Earth Is IBM Still Making Mainframes?
http://www.garlic.com/~lynn/2015.html#78 Is there an Inventory of the Installed Mainframe Systems Worldwide
http://www.garlic.com/~lynn/2015.html#82 Is there an Inventory of the Installed Mainframe Systems Worldwide
http://www.garlic.com/~lynn/2015c.html#29 IBM Z13
http://www.garlic.com/~lynn/2015c.html#30 IBM Z13
http://www.garlic.com/~lynn/2015c.html#93 HONE Shutdown
http://www.garlic.com/~lynn/2015d.html#39 Remember 3277?
http://www.garlic.com/~lynn/2015e.html#14 Clone Controllers and Channel Extenders
http://www.garlic.com/~lynn/2015f.html#0 What are some of your thoughts on future of mainframe in terms of Big Data?
http://www.garlic.com/~lynn/2015f.html#5 Can you have a robust IT system that needs experts to run it?
http://www.garlic.com/~lynn/2015f.html#35 Moving to the Cloud
http://www.garlic.com/~lynn/2015f.html#93 Miniskirts and mainframes
http://www.garlic.com/~lynn/2015g.html#19 Linux Foundation Launches Open Mainframe Project
http://www.garlic.com/~lynn/2015g.html#42 20 Things Incoming College Freshmen Will Never Understand
http://www.garlic.com/~lynn/2015g.html#93 HP being sued, not by IBM.....yet!
http://www.garlic.com/~lynn/2015g.html#96 TCP joke
http://www.garlic.com/~lynn/2015h.html#2 More "ageing mainframe" (bad) press
http://www.garlic.com/~lynn/2015h.html#108 25 Years: How the Web began
http://www.garlic.com/~lynn/2015h.html#110 Is there a source for detailed, instruction-level performance info?
http://www.garlic.com/~lynn/2015h.html#114 Between CISC and RISC
http://www.garlic.com/~lynn/2016.html#15 Dilbert ... oh, you must work for IBM
http://www.garlic.com/~lynn/2016.html#19 Fibre Chanel Vs FICON
http://www.garlic.com/~lynn/2016b.html#23 IBM's 3033; "The Big One": IBM's 3033
--
virtualization experience starting Jan1968, online at home since Mar1970