Discussion:
Holy wars of the past - how did they turn out?
Thomas Koenig
2021-02-04 17:15:51 UTC
The proverbial computer holy war is probably big vs. little endian,
which has been pretty much decided in favor of little endian
by default or by Intel, although TCP/IP is big-endian.

Machine language vs. those CPU-time-wasting assemblers - made
obsolescent by high-level programming languages.

High-level vs. assembler: Hardly anybody does assembler any more.

Structured programming vs. goto - structured programming won.

RISC vs. CISC: The really complex CISC-architectures died out.
The difference is now less important with superscalar architectures.

VMS vs. Unix - decided by DEC's fate, and by Linux.

DECNET vs. TCP/IP: See above.

Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using it even on Windows.

Everybody vs. Fortran: Hating FORTRAN became the very definition
of a computer scientist. They didn't notice that, since 1991,
it has become quite a modern programming language.

Others?
Jim Jackson
2021-02-04 17:44:34 UTC
Post by Thomas Koenig
The proverbial computer holy war is probably big vs. little endian,
which has been pretty much decided in favor of little endian
by default or by Intel, although TCP/IP is big-endian.
Machine language vs. those CPU-time-wasting assemblers - made
obsolescent by high-level programming languages.
High-level vs. assembler: Hardly anybody does assembler any more.
Structured programming vs. goto - structured programming won.
RISC vs. CISC: The really complex CISC-architectures died out.
The difference is now less important with superscalar architectures.
VMS vs. Unix - decided by DEC's fate, and by Linux.
DECNET vs. TCP/IP: See above.
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Everybody vs. Fortran: Hating FORTRAN become the very definition
of a computer scientist. They didn't notice that, since 1991,
it has become quite a modern programming language.
Ethernet v. IBM Token Ring - did you know that ethernet could never work
as it would collapse under moderately heavy loads?
Ahem A Rivet's Shot
2021-02-04 18:24:28 UTC
On Thu, 4 Feb 2021 17:44:34 -0000 (UTC)
Post by Jim Jackson
Ethernet v. IBM Token Ring - did you know that ethernet could never work
as it would collapse under moderately heavy loads?
Ethernet over thicknet or co-ax did.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Jim Jackson
2021-02-04 18:47:27 UTC
Post by Ahem A Rivet's Shot
On Thu, 4 Feb 2021 17:44:34 -0000 (UTC)
Post by Jim Jackson
Ethernet v. IBM Token Ring - did you know that ethernet could never work
as it would collapse under moderately heavy loads?
Ethernet over thicknet or co-ax did.
Well, not in my experience, which was extensive. The vampire taps were a
B*, as was fitting the N-type plugs, but thin/cheapernet sorted that.

Of course, as with any network, sizing and planning and knowing what you
were doing always helped.
gareth evans
2021-02-04 21:49:06 UTC
Post by Jim Jackson
Post by Ahem A Rivet's Shot
On Thu, 4 Feb 2021 17:44:34 -0000 (UTC)
Post by Jim Jackson
Ethernet v. IBM Token Ring - did you know that ethernet could never work
as it would collapse under moderately heavy loads?
Ethernet over thicknet or co-ax did.
Well not in my experience, which was extensive. The vampire taps were a
B*, as were fitting the N-type plugs, but thin/cheaper net sorted that.
The N plugs were named by a mathematician because there are N different
ways of assembling them, and all are wrong!

Actually, N is after the designer Neill, just as the C type was named
after Concelman, and they joined forces to create the BNC,
Bayonet-Neill-Concelman.
Jim Jackson
2021-02-06 12:49:41 UTC
Post by gareth evans
Post by Jim Jackson
Post by Ahem A Rivet's Shot
On Thu, 4 Feb 2021 17:44:34 -0000 (UTC)
Post by Jim Jackson
Ethernet v. IBM Token Ring - did you know that ethernet could never work
as it would collapse under moderately heavy loads?
Ethernet over thicknet or co-ax did.
Well not in my experience, which was extensive. The vampire taps were a
B*, as were fitting the N-type plugs, but thin/cheaper net sorted that.
The N plugs were named by a mathematician because there are N different
ways of assembling them, and all are wrong!
:-)
Post by gareth evans
Actually N after the designer Neill, just as the C type were named
after Concelman, and they joined forces to create the BNC,
Bayonet-Neill-Concelman
Well, one learns something every day.

Thanks
J. Clarke
2021-02-04 20:34:33 UTC
On Thu, 4 Feb 2021 17:44:34 -0000 (UTC), Jim Jackson
Post by Jim Jackson
Post by Thomas Koenig
The proverbial computer holy war is probably big vs. little endian,
which has been pretty much decided in favor of little endian
by default or by Intel, although TCP/IP is big-endian.
Machine language vs. those CPU-time-wasting assemblers - made
obsolescent by high-level programming languages.
High-level vs. assembler: Hardly anybody does assembler any more.
Structured programming vs. goto - structured programming won.
RISC vs. CISC: The really complex CISC-architectures died out.
What do you consider to be a "really complex CISC-architecture"?
Post by Jim Jackson
Post by Thomas Koenig
The difference is now less important with superscalar architectures.
VMS vs. Unix - decided by DEC's fate, and by Linux.
DECNET vs. TCP/IP: See above.
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Everybody vs. Fortran: Hating FORTRAN become the very definition
of a computer scientist. They didn't notice that, since 1991,
it has become quite a modern programming language.
Ethernet v. IBM Token Ring - did you know that ethernet could never work
as it would collapse under moderately heavy loads?
John Levine
2021-02-04 21:08:25 UTC
Post by J. Clarke
Post by Thomas Koenig
RISC vs. CISC: The really complex CISC-architectures died out.
What do you consider to be a "really complex CISC-architecture"?
The usual example is VAX.

I'd say IBM zSeries is pretty CISC but it has a unique niche.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
J. Clarke
2021-02-04 23:35:49 UTC
Post by John Levine
Post by J. Clarke
Post by Thomas Koenig
RISC vs. CISC: The really complex CISC-architectures died out.
What do you consider to be a "really complex CISC-architecture"?
The usual example is VAX.
I'd say IBM zSeries is pretty CISC but it has a unique niche..
You might want to compare those to Intel.

The instruction set reference for the VAX is a single chapter with 141
pages. The instruction set reference for Intel is three volumes with
more than 500 pages each.
Scott Lurndal
2021-02-05 00:18:08 UTC
Post by J. Clarke
Post by John Levine
Post by J. Clarke
Post by Thomas Koenig
RISC vs. CISC: The really complex CISC-architectures died out.
What do you consider to be a "really complex CISC-architecture"?
The usual example is VAX.
I'd say IBM zSeries is pretty CISC but it has a unique niche..
You might want to compare those to Intel.
The instruction set reference for the VAX is a single chapter with 141
pages. The instruction set reference for Intel is three volumes with
more than 500 pages each.
And the architecture reference manual for ARMv8 (soi-disant RISC processor) is
pushing 9000 pages, of which 2200 are the instruction sets, and that doesn't
include the scalable vector extension instructions (hundreds of instructions).
a***@math.uni.wroc.pl
2021-02-06 22:32:55 UTC
Post by J. Clarke
Post by John Levine
Post by J. Clarke
Post by Thomas Koenig
RISC vs. CISC: The really complex CISC-architectures died out.
What do you consider to be a "really complex CISC-architecture"?
The usual example is VAX.
I'd say IBM zSeries is pretty CISC but it has a unique niche..
You might want to compare those to Intel.
The instruction set reference for the VAX is a single chapter with 141
pages. The instruction set reference for Intel is three volumes with
more than 500 pages each.
Such comparison completely misses the point. An important
design point for RISC was that instructions should be
implementable to execute in one cycle at a high clock
frequency. In 1980 that required drastic simplification
of instructions; now one can have more complexity and
still fit in one cycle. CPU designers formulated
several features deemed necessary for a fast 1 IPC
implementation. This set of features became the
religious definition of RISC. The RISC versus CISC
war died out mostly because, due to advances in
manufacturing and CPU design, several of the religious
RISC features became almost irrelevant to CPU
speed.

VAX instructions do complex things, in particular
multiple memory references with interesting
addressing modes. That was impossible to implement
in one cycle using technology from 1990 (and probably
still is impossible). 360 and 386 and their descendants
are in fact not that far from RISC: there are plenty
of complex instructions of dubious utility, but the core
instruction set consists of instructions having one
memory access, which starting from around 1990 could be
implemented in a single cycle. They have complex
instruction encoding which required extra chip
space compared to religious RISC, but in modern
chips instruction decoders are tiny compared to
other parts. Around 1995 AMD and Intel invented
a trick so that the effective speed of instruction
decoders is very high and religious RISC has
little if any advantage over 386 (or 360) there.

To put this in historical context: I have a translation
of a computer architecture book from 1976 by Tanenbaum.
In this book Tanenbaum writes about implementing
very complex high-level style instructions using
microcode (a "Cobol" machine, a "Fortran" machine).
Tanenbaum was very positive about such machines
and advocated that future designs should be of this
sort. The RISC movement went in a completely different
direction, simplifying the instruction set and eliminating
microcode. In a sense, the RISC movement realised
that with moderate extra effort one could
turn former microcode engines into actually
useful and very fast processors. In this
sense the RISC movement won: it seems that
nobody now uses microcode for speed-critical
instructions.
--
Waldek Hebisch
Robin Vowels
2021-02-07 09:04:12 UTC
Post by J. Clarke
Post by John Levine
Post by J. Clarke
Post by Thomas Koenig
RISC vs. CISC: The really complex CISC-architectures died out.
What do you consider to be a "really complex CISC-architecture"?
The usual example is VAX.
I'd say IBM zSeries is pretty CISC but it has a unique niche..
You might want to compare those to Intel.
The instruction set reference for the VAX is a single chapter with 141
pages. The instruction set reference for Intel is three volumes with
more than 500 pages each.
Such comparison completely misses the point. Important
design point for RISC that instructions should be
implementable to execute in one cycle using high clock
frequency.
.
The promotion of RISC machines as a better method than
CISC machines was largely misguided.
RISC was more suited to simple microprocessors, with limited
instruction sets.
A CISC instruction such as a memory move, or a translate
instruction, did a lot of work. A RISC needed a clock rate
about ten times faster than a CISC to achieve the same speed.
In other words, a programmer writing code for a RISC was
effectively writing microcode.
To be useful to an assembler programmer, a computer instruction
needed to do more work rather than less.
Looking back at first generation computers, we see that
array operations were possible in 1951 on Pilot ACE,
and on DEUCE (1955). These operations included memory
move, array addition, array subtraction, etc.
Such instructions minimised the number of instructions needed
to do a given computation, as well as, of course, reducing
execution time.
Such instructions did not seem important to designers
of second generation machines, with widespread use of
transistors.
Computers implementing array operations did not appear
again until the 1970s.
.
In 1980 that required drastic simplification
of instructions,
.
now one can have more complexity and
still fit in one cycle. CPU designers formulated
several features deemed necessary for fast 1 IPC
implementation. This set of features became
religious definiton of RISC. RISC versus CICS
war died out mostly because due to advance in
manufacturing and CPU design several of religious
RISC featurs became almost irrelevant to CPU
speed.
VAX instructions do complex things, in particular
multiple memory refereces with interesting
addressing modes. That was impossible to implement
in one cycle using technology from 1990 (and probably
still is impossible). 360 and 386 and their descendants
.
I disagree.
Most of the S/360 character instructions that move/compare/
translate/search character strings are a long way from RISC.
Floating-point instructions also are far from RISC, especially
multiplication and division. Even addition and subtraction can
require multiple steps for post-normalization. (One of the few
computational instructions that could have been
implemented in a RISC was Halve floating-point.)
And then there are the decimal instructions. Even addition and
subtraction require multiple steps (not to mention multiplication
and division). All these are CISC instructions.
In the integer instruction set, multiplication and division are CISC.
Instructions such as Test and Set are complex, and possibly the
loop control instructions BXLE and BXH.
.
there is plenty
of complex instructions of dubious utility. But core
instruction set consists of instructions having one
memory access which starting from around 1990 can be
implemented in single cycle. They have complex
instruction encoding which required extra chip
space compared to religious RISC. But in modern
chips instruction decoders are tiny compared to
other parts. Around 1995 AMD and Intel invented
a trick so that effective speed of instruction
decoders is very high and religious RISC has
little if any advantage over 386 (or 360) there.
To put this in historical context: I have translation
of computer architecture book from 1976 by Tanenbaum.
In this book Tanenbaum writes about implementing
very complex high-level style instructions using
microcode ("Cobol" machine, "Fortran" machine).
Tanenbaum was very positive about such machines
and advocated future design should be of this
sort. RISC movement was in competely different
direction, simplifing instruction set and eliminating
microcode. In a sense, RISC movement realised
that with moderate extra effort one could
turn former microcode engines into actually
useful and very fast processor.
.
The problem with RISC design is that one needs many more
instructions to do the same amount of work. Many more instructions
need to be fetched (compared to CISC), tying up the data bus at the
same time that data is being fetched from / stored to memory.
.
In this
sense RISC movement won: it seems that
nobody now uses microcode for speed-critical
instructions.
John Levine
2021-02-07 18:31:16 UTC
Post by Robin Vowels
Such comparison completely misses the point. Important
design point for RISC that instructions should be
implementable to execute in one cycle using high clock
frequency.
.
The design of RISC machines was largely misguided as a
better method than CISC machines.
RISC was more suited to simple microprocessors, with limited
instruction sets.
A CISC instruction such as a memory move, or a translate
instruction, did a lot of work. To run at the same speed,
a RISC needed a clock rate about ten times faster
than CISC to achieve the same speed. ...
RISC made sense at one point in the history of microprocessor
development, but it turned out to be a pretty short window. The Intel
486 and the RISCy i860 were designed at the same time and reports were
that the 860 was about twice as fast for CPU work. It had poorly
designed interrupts and some too-clever floating point ops. Float
instructions could expose the pipeline so each floating instruction
returned the result of the previous instruction, which I suppose was
fine if you were doing a lot of dot products.

Once transistor density got to the point where chips could decompose and pipeline
the instruction stream, the difference stopped being very interesting.

Some RISC ideas were just mistakes. The SPARC had register windows to
make subroutine call and return faster by avoiding register save and
restore. But that was because the compiler they were using, PCC,
didn't have very sophisticated register allocation. The contemporary
IBM 801 project's compiler, PL.8, had the first graph-coloring
allocator, which reduced the save and restore work to the point that it
was not a big deal.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
a***@math.uni.wroc.pl
2021-02-08 05:00:02 UTC
Post by Robin Vowels
Post by J. Clarke
Post by John Levine
Post by J. Clarke
Post by Thomas Koenig
RISC vs. CISC: The really complex CISC-architectures died out.
What do you consider to be a "really complex CISC-architecture"?
The usual example is VAX.
I'd say IBM zSeries is pretty CISC but it has a unique niche..
You might want to compare those to Intel.
The instruction set reference for the VAX is a single chapter with 141
pages. The instruction set reference for Intel is three volumes with
more than 500 pages each.
Such comparison completely misses the point. Important
design point for RISC that instructions should be
implementable to execute in one cycle using high clock
frequency.
.
The design of RISC machines was largely misguided as a
better method than CISC machines.
RISC was more suited to simple microprocessors, with limited
instruction sets.
A CISC instruction such as a memory move, or a translate
instruction, did a lot of work. To run at the same speed,
a RISC needed a clock rate about ten times faster
than CISC to achieve the same speed.
What you write is extremely misleading. RISC design was
based on observing actual running programs and taking
statistics of instruction use. RISC got rid of infrequently
used complex instructions, but that does not mean that a
single RISC instruction does only a little work. For
example, autoincrement was a frequent feature. In a typical
program that marches through an array, a RISC step would be
two instructions:

load with autoincrement
computing op

On i386 one could do this in a similar way or use a different
two instructions:

compute with register indirect argument
increment address register

On the early 360 only the second possibility was available (of
course, each machine could also use longer sequences, but
I am interested in the most efficient one).

SPARC (and the Berkeley RISC before it) had register windows;
consequently procedure entry and return did a lot of work in a
single instruction. Unlike STM on the 360, the register window
operation was done in one clock. Later it turned out that procedure
entry and return, while frequent, is not frequent enough to
justify the cost of the hardware. Additionally, with better compilers
a RISC machine without register windows could do calls
only marginally slower than a machine with register windows,
so the gain was small and register windows went out of fashion.
But they nicely illustrate that a single RISC instruction
could do a lot of work. The real question was which
instructions were important enough to allocate the hardware
resources needed to do the work, and which were
unimportant and offered the possibility of savings. Also,
part of the RISC philosophy was that multicycle instructions
frequently can be split into a sequence of single-cycle
ones. So while RISC may need more instructions for
given work, the number of cycles was usually smaller than
for CISC. This is very visible comparing i386 and
RISCs of comparable complexity: all i386 instructions were
multi-cycle ones, frequently needing more than 3 cycles, while a
RISC could do most (or all) in a single cycle.
Post by Robin Vowels
In other words, a programmer writing code for a RISC was
effectively writing microcode.
To be useful to an assembler programmer, a computer instruction
needed to do more work rather than less.
Do you have any experience writing RISC assembler? I have
worked on a compiler backend for ARM and have written a few
thousand lines of ARM assembly. Several ARM routines
had _fewer_ instructions than the routine performing the equivalent
function on i386. On average ARM seems to require slightly
more instructions than i386, but probably on the order of a few
percent more. Compiled code for ARM is longer by about
30%, but the main reason is not the number of instructions. Rather,
ARM instructions (more precisely, ARM 5) are all 32 bits.
i386 instructions on average tend to be shorter than
4 bytes, so this is one reason for shorter code. The other
is constants: one can put 32-bit constants directly into
i386 instructions, but on ARM only small constants can
be included in an instruction, while others need to go into
a literal pool (the same happens on the old 360).

While I did not write anything substantial in assembler
for other RISCs, I saw reasonably large samples of assembler
for MIPS, SPARC and HPPA and I can assure you that none of them
requires many more instructions than CISC. I also compiled
(using GCC) a program having about 24000 lines of C for
different architectures. The longest executables were s390
and SPARC (IIRC on the order of 240 kB object code), the shortest i386
(on the order of 180 kB); HPPA was slightly larger than i386
(IIRC something like 190 kB).
Post by Robin Vowels
Looking back at first generation computers, we see that
array operations were possible in 1951 on Pilot ACE,
and on DEUCE (1955). These operations included memory
move, array addition, array subtraction, etc.
Such minimised the number of instructions needed to do
a given computation, as well as, of course, to reduce
execution time.
Such instructions did not seem important to designers
of second generation machines, with widespread use of
transistors.
More recently, computers implementing array operations
did not appear until the 1970s.
.
In 1980 that required drastic simplification
of instructions,
.
now one can have more complexity and
still fit in one cycle. CPU designers formulated
several features deemed necessary for fast 1 IPC
implementation. This set of features became
religious definiton of RISC. RISC versus CICS
war died out mostly because due to advance in
manufacturing and CPU design several of religious
RISC featurs became almost irrelevant to CPU
speed.
VAX instructions do complex things, in particular
multiple memory refereces with interesting
addressing modes. That was impossible to implement
in one cycle using technology from 1990 (and probably
still is impossible). 360 and 386 and their descendants
.
I disagree.
Most of the S/360 character instructions that move/compare/
translate/search character strings are a long way from RISC.
Sure, S/360 has a lot of complex instructions. But most of
them are either system instructions or can be replaced by
sequences of simpler S/360 operations. In a machine like
the VAX almost all is complex instructions; if you remove them
the machine probably would be useless. On S/360 if you want
fast code there is a good chance that your program uses
mainly simple instructions (they are the fast ones).
Post by Robin Vowels
Floating-point instructions also are far from RISC, especially
multiplication and division.
Huh? Every RISC that I used had an FPU.
Post by Robin Vowels
Even addition and subtraction can
require multiple steps for post-normalization.
Maybe you did not realize that RISC machines are pipelined?
FPU addition usually needs 2-3 pipeline stages, multiplication
may need between 2 and 5 (depending on the actual machine).
On a pipelined machine you may issue a new operation every cycle, but
need to wait between providing arguments and using the
result. That is, after issuing an FPU instruction there must
be some other instructions (possibly other FPU instructions,
possibly a NOP if you have no useful work) before you may
use the result. The HPPA 712 had an FPU multiply-and-add instruction
and could execute loads in the same cycle as an FPU operation.
In effect a 60 MHz HPPA could do 120 MFLOPS
(usually it was less but I saw some small but real code
running at that speed).

Very early RISCs had the FPU as a coprocessor that did FPU work
while the main CPU simultaneously executed integer instructions.
Clearly such a coprocessor was much slower than later RISCs,
but it was not much different than coprocessors used on
comparable CISCs. Of course, in early RISC times big mainframes
and supercomputers had better floating point speed than
RISC, but big machines had much more hardware and their cost
was hundreds if not thousands of times higher than RISC cost.
Post by Robin Vowels
(One of the few
computational instructions that could have been have been
implemented in a RISC was Halve floating-point.
And then there are the decimal instrucuitons. Even addition and
subtraction require multiple steps (not to mention multiplication
and division. All these are CISC instructions.
Packed decimal instructions on the 360 do not fly either; they
are multicycle instructions. On machines where timings
are published it is clear that they are done by a microcode
loop. For example, on the 360-85 decimal addition costs slightly
more per byte than 32-bit addition. With hardware for the decimal
step a RISC subroutine could do them at comparable speed. Even
on a RISC without decimal hardware a decimal subroutine can
run at reasonable speed. Again, on the 360-85 a single step of
TR has time equal to 3 ADDs, plus substantial setup time.
A RISC subroutine can do that at comparable speed.
The same applies to string instructions: on RISC you
need a subroutine, but the subroutine can be quite fast.

Anyway, I do not consider decimal instructions as core
instructions. I know that they are widely used in IBM
shops. However, somebody wanting the best speed would go
to binary: binary data is smaller, so when your main
data is on tapes or discs transfer of binary data is
faster. The only reasons for decimal are initial entry
(which even in 1965 was not the main computational
cost), printing (again, printers were much slower than
CPUs so conversion cost was not important) and
(the main reason) inertia. Granted, decimal makes a lot
of sense for a cards-only shop, but when RISC arrived
I think that punched cards as main storage were obsolete
and uneconomical (but there was enough inertia that
apparently some institutions used cards quite long).
Post by Robin Vowels
In the integer instruction set, multiplication and division are CISC.
Instructions such as Test and Set are complex, and possibly the
loop control instructions BXLE and BXH.
No. Early RISCs skipped multiplication because at that time
one could not fit a fast multiplier on the chip. But
pretty quickly chip technology caught up and RISC chips
included multipliers. Similarly for other instructions; the
main point is whether an instruction is useful and can have a fast
implementation. There is nothing un-RISC-y in a loop control
instruction that simultaneously jumps and updates a register.
Some RISCs have instructions of this sort. RISC
normally avoids instructions needing multiple memory
accesses, as most such instructions can be replaced
by sequences of simpler instructions. But pragmatic
RISC recognizes that atomics have to be done as one
instruction. You may call them CISC, but as long as
you can do them without microcode (just using hardwired
control) and you do not spoil the pipeline structure
they are OK. Similarly with division: it is CISC-y,
but if a divider does not blow up your transistor budget
and the rest of the chip stays RISC, then it is OK. Around
15 years ago chip technology advanced enough that
high end RISCs included dividers. Currently, a tiny
RISC (Cortex-M0) has a multiplier, but no divider.
Bigger (but still relatively small) chips have dividers.
Post by Robin Vowels
there is plenty
of complex instructions of dubious utility. But core
instruction set consists of instructions having one
memory access which starting from around 1990 can be
implemented in single cycle. They have complex
instruction encoding which required extra chip
space compared to religious RISC. But in modern
chips instruction decoders are tiny compared to
other parts. Around 1995 AMD and Intel invented
a trick so that effective speed of instruction
decoders is very high and religious RISC has
little if any advantage over 386 (or 360) there.
To put this in historical context: I have translation
of computer architecture book from 1976 by Tanenbaum.
In this book Tanenbaum writes about implementing
very complex high-level style instructions using
microcode ("Cobol" machine, "Fortran" machine).
Tanenbaum was very positive about such machines
and advocated future design should be of this
sort. RISC movement was in competely different
direction, simplifing instruction set and eliminating
microcode. In a sense, RISC movement realised
that with moderate extra effort one could
turn former microcode engines into actually
useful and very fast processor.
.
The problem with RISC design is that one needs many more
instructions to do the same amount of work. Many more instructions
need to be fetched (compared to CISC), tying up the data bus at the
same time that data is being being fetched from / stored to memory.
Most RISCs have a dedicated instruction fetch bus, separate
from the data bus, so there is no problem with tying up the bus. Note that
all the RISCs that I used had caches, so the buses were part of the
CPU complex. Since the vast majority of instructions comes
from the cache, instruction fetch has limited impact on
main memory access (traffic between main memory and cache).
There is a disadvantage: longer + more instructions mean that
a RISC needs a bigger cache or has a lower hit rate for the
same cache size. This is an important factor explaining why
the i386 won: i386 made better use of caches than RISC.

Modern ARM in 32-bit mode offers the option of mixed 16-bit
and 32-bit instructions -- this is an example of RISC dropping
one of the features that were claimed to be essential for RISC
(that is, fixed length instructions).
--
Waldek Hebisch
Scott Lurndal
2021-02-07 16:31:07 UTC
Post by a***@math.uni.wroc.pl
To put this in historical context: I have translation
of computer architecture book from 1976 by Tanenbaum.
In this book Tanenbaum writes about implementing
very complex high-level style instructions using
microcode ("Cobol" machine, "Fortran" machine).
Tanenbaum was very positive about such machines
and advocated future design should be of this
sort.
The Burroughs Small Systems used custom loadable
microcode for each language. The last systems,
the B1900 series, were still being used into the
early 2000s.
Thomas Koenig
2021-02-07 17:10:15 UTC
Post by Scott Lurndal
Post by a***@math.uni.wroc.pl
To put this in historical context: I have translation
of computer architecture book from 1976 by Tanenbaum.
In this book Tanenbaum writes about implementing
very complex high-level style instructions using
microcode ("Cobol" machine, "Fortran" machine).
Tanenbaum was very positive about such machines
and advocated future design should be of this
sort.
The Burroughs Small Systems used custom loadable
microcode for each language. The last systems,
the B1900 series, were still being used into the
early 2000's.
The original idea of the 801 was to use it as a microcode core
for other machines. They used it as a channel processor on the
big iron systems later, which pretty much fits the description.
John Levine
2021-02-07 18:18:12 UTC
Reply
Permalink
Post by Thomas Koenig
The original idea of the 801 was to use it as a microcode core
for other machines. They used it as a channel processor on the
big iron systems later, which pretty much fits the description.
That was one use, but the point of the project was to figure out how far
down they could strip the S/360 architecture by using a very advanced
compiler. Pretty far, it turned out. PL.8, as its name suggests, was a
subset of PL/I in which one could write all sorts of programs.

The 801 morphed into the ROMP chip used in the RT/PC and sort of indirectly
into the POWER series.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Anne & Lynn Wheeler
2021-02-08 01:17:15 UTC
Reply
Permalink
Post by Thomas Koenig
The original idea of the 801 was to use it as a microcode core
for other machines. They used it as a channel processor on the
big iron systems later, which pretty much fits the description.
I periodically claim that John Cocke's motivation was to go to the
opposite extreme from the complexity of the failed "Future System"
design (never announced or shipped; one of the final nails was an
analysis showing that porting a 370/195 application to an FS machine
made out of the fastest available hardware would give the throughput
of a 370/145 ... around a factor of 30 slowdown).

A presentation from the 801/risc group in late 76 or early 77 was that
the extreme lack of features in the hardware would be compensated for
by compiler technology ... including that the 801 would have no
hardware protection domain and all instructions could be executed
directly by applications or libraries w/o needing supervisor calls;
the cp.r operating system would only load "correct" programs and the
pl.8 compiler would only generate correct programs.

A pitch was made to convert the huge myriad and variety of different
internal microprocessors to 801/risc ... the emulators used in low-
and mid-range 370s (801/risc Iliad chips), the as/400 followon to the
s/38, i/o controllers and i/o channel processors. The 801/risc ROMP
chip was originally going to be used for a Displaywriter followon
(running cp.r), but when that got killed, they decided to retarget to
the unix workstation market; they got the company that had done the
AT&T unix port of PC/IX to do one for ROMP ... and
privileged/non-privileged hardware states had to be added (for the
unix system model).

trivia: all the low/mid range 370 emulation ran about ten native
instructions per 370 instruction ... not all that different from
existing 370 emulators that run on Intel platforms ... and it would be
all of these that would convert to 801/risc Iliad chips. The (370)
4361/4381 followon to the 4331/4341 was supposed to use 801/risc
Iliad ... I helped with a white paper that showed that the majority of
370 could now be implemented directly in cisc chip silicon ... rather
than emulation. That and many of the other 1980-era 801/risc efforts
floundered, and then some number of 801/risc chip engineers left for
other vendors.

more trivia: some info about the failed FS (early/mid 70s) here
http://www.jfsowa.com/computer/memo125.htm
http://people.cs.clemson.edu/~mark/fs.html
--
virtualization experience starting Jan1968, online at home since Mar1970
John Levine
2021-02-08 02:42:15 UTC
Reply
Permalink
Post by Anne & Lynn Wheeler
Presentation from the 801/risc group late 76 or early 77 was that the
extreme lack of features in the hardware would be compensated by
compiler technology ... including 801 would have no hardware
protection domain and all instructions could executed directly
by application or libraries w/o needing supervisor calls, the cp.r
operating system would only load "correct" programs and the pl.8
compiler would only generate correct programs.
That wasn't a new idea. The Burroughs B5500 series depended on the
compilers not generating insecure code.
Post by Anne & Lynn Wheeler
but when that got killed, they decided to retarget to the unix
workstation market, they got the company that had done the AT&T unix
port of PC/IX to do one for ROMP ...
Yeah, that was me. IBM provided a rather heavyweight extended virtual
machine and our code ran rather slowly on top of that. Someone else
did a native port of BSD which ran a lot faster.

R's,
John
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Anne & Lynn Wheeler
2021-02-08 04:06:34 UTC
Reply
Permalink
Post by John Levine
Yeah, that was me. IBM provided a rather heavyweight extended virtual
machine and our code ran rather slowly on top of that. Someone else
did a native port of BSD which ran a lot faster.
folklore is that they had these 200 pl.8 programmers (from displaywriter
project) that needed something to do ... the claim was that with their
801 & pl.8 knowledge they could quickly create an abstract virtual
machine greatly simplifying the unix port ... and the aggregate effort
for both them and you ... would be significantly less than you doing the
port directly.

Besides taking longer and running slower ... it also created a nightmare
for people doing their own new device drivers ... having to do one in
unix (AIX) and another in the virtual machine layer.

Palo Alto was working with UCB on a BSD port to the IBM mainframe and
with UCLA on a port of their LOCUS to the mainframe (they had it up
and running on an ibm series/1)
https://en.wikipedia.org/wiki/LOCUS_(operating_system)

Then Palo Alto was redirected to do the BSD port to the PC/RT (ROMP,
bare machine) instead (it came out as AOS) ... they did it with
enormously less effort than even the Austin effort to create the
virtual machine.

Then Palo Alto also went on to do the LOCUS port to the ibm mainframe
and i386 (which shipped as aix/370 and aix/386).
--
virtualization experience starting Jan1968, online at home since Mar1970
Andreas Kohlbach
2021-02-05 01:04:23 UTC
Reply
Permalink
Post by Jim Jackson
Ethernet v. IBM Token Ring - did you know that ethernet could never work
as it would collapse under moderately heavy loads?
Yep. Token Ring is the future, as
says, although Robert
Metcalfe disagrees. But what does he know. ;-)
--
Andreas
Anne & Lynn Wheeler
2021-02-05 20:35:46 UTC
Reply
Permalink
Post by Jim Jackson
Ethernet v. IBM Token Ring - did you know that ethernet could never work
as it would collapse under moderately heavy loads?
The IBM E&S center in Dallas turned out a report comparing 16mbit
token-ring to ethernet ... my only explanation was that they used the
prototype 3mbit ethernet, from before the standard that included
listen-before-transmit.

The new IBM Almaden research building was provisioned with extensive
cat5 assuming 16mbit token-ring ... but they found that ethernet had
higher aggregate network throughput and lower latency, and the
ethernet cards had higher per-card throughput.

ACM SIGCOMM 8/16-19, 1988, V18N4 had an extensive analysis of 10mbit
ethernet ... they demonstrated a 30-station lan with low-level device
driver code in a constant loop transmitting minimum-size packets, with
effective LAN throughput dropping off to 8mbit/sec. These were $69
10mbit ethernet cards.

The $899 16mbit T/R microchannel cards were significantly
kneecapped ... the IBM PC/RT group had done their own at-bus 4mbit
token-ring card and found it had higher per-card throughput than the
microchannel 16mbit cards (and they were forbidden from doing their
own 16mbit T/R microchannel card). The communication group was
fiercely fighting off client/server and distributed computing, and the
16mbit T/R microchannel cards had a design point of 300+ stations
sharing the same LAN doing dumb terminal emulation.

The claimed motivation for the development of T/R and CAT5 was that
the 3270 coax was starting to exceed bldg floor-loading limits
... i.e. each 3270 dumb terminal had its own coax run from the machine
room (the cable trays were becoming massive) ... the communication
group was trying to address the problem with CAT5 T/R lans going to
IBM/PCs emulating 3270 dumb terminals.
--
virtualization experience starting Jan1968, online at home since Mar1970
Bob Eager
2021-02-05 21:27:23 UTC
Reply
Permalink
Post by Anne & Lynn Wheeler
IBM E&S center in Dallas turned out report comparing 16mbit token-ring
to ethernet ... my only explanation was that they used prototype 3mbit
ethernet before standard that included listen-before-transmit.
The university where I worked had their first ring network in 1978. It
ended up with several linked rings until it was decommissioned in favour
of ethernet in about 1992.

It worked well, but it wasn't IBM.
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
J. Clarke
2021-02-05 22:38:10 UTC
Reply
Permalink
On Fri, 05 Feb 2021 10:35:46 -1000, Anne & Lynn Wheeler
Post by Anne & Lynn Wheeler
Post by Jim Jackson
Ethernet v. IBM Token Ring - did you know that ethernet could never work
as it would collapse under moderately heavy loads?
IBM E&S center in Dallas turned out report comparing 16mbit token-ring
to ethernet ... my only explanation was that they used prototype 3mbit
ethernet before standard that included listen-before-transmit.
New IBM Almaden research was provisioned with extensive cat5 assuming
16mbit token-ring ... but found that ethernet had higher aggregate
network throughput, lower latency, and ethernet cards had higher
per card throughput
ACM SIGCOMM 8/16-19, 1988, V18N4 had extensive analysis of 10mbit
ethernet ... they demonstration 30station lan with low-level device
driver code in constant loop transmitting minimum size packets with
effective LAN throughput dropping off to 8mbit/sec. These were $69
10mbit ethernet cards.
$899 16mbit T/R microchannel cards were significantly kneecapped ... IBM
PC/RT had done their own at-bus 4mbit token-ring card and found it had
higher per card throughput than microchannel 16mbit cards (and were
forbidden from doing their own 16mbit T/R microchannel cards). The
communication group was fiercely fighting off client/server and
distributed computing and 16mbit T/R microchannel cards had design point
of 300+ stations sharing same LAN doing dumb terminal emulation.
Claimed motivation for development of T/R and CAT5 ... was that
the 3270 dumb terminals was 3270 coax was starting to exceed
bldg load weights ... i.e. each 3270 dumb terminal had coax
that ran from machine room for each 3270 (cable trays were
becoming massive) ... communication group was trying
to address the problem with CAT5 T/R lans going to IBM/PCs
emulating 3270 dumb terminals.
And the sad part is that other than it being wifi instead of T/R
that's _still_ the way the mainframe is accessed most of the time. I'm
considered a weirdo for "slowing myself down" by using Eclipse with
the CompuWare plugin. I admit, one of the old guys _can_ go through
three screens and type a keyword about as fast as I can right click
and do the same thing but he's been doing that particular sequence for
40 years.
Jim Jackson
2021-02-06 12:56:20 UTC
Reply
Permalink
Post by Anne & Lynn Wheeler
Post by Jim Jackson
Ethernet v. IBM Token Ring - did you know that ethernet could never work
as it would collapse under moderately heavy loads?
Perhaps you thought I was being serious, while I was being sarcastic :-)
Having said that I loved the refs below...
Post by Anne & Lynn Wheeler
IBM E&S center in Dallas turned out report comparing 16mbit token-ring
to ethernet ... my only explanation was that they used prototype 3mbit
ethernet before standard that included listen-before-transmit.
New IBM Almaden research was provisioned with extensive cat5 assuming
16mbit token-ring ... but found that ethernet had higher aggregate
network throughput, lower latency, and ethernet cards had higher
per card throughput
ACM SIGCOMM 8/16-19, 1988, V18N4 had extensive analysis of 10mbit
ethernet ... they demonstration 30station lan with low-level device
driver code in constant loop transmitting minimum size packets with
effective LAN throughput dropping off to 8mbit/sec. These were $69
10mbit ethernet cards.
$899 16mbit T/R microchannel cards were significantly kneecapped ... IBM
PC/RT had done their own at-bus 4mbit token-ring card and found it had
higher per card throughput than microchannel 16mbit cards (and were
forbidden from doing their own 16mbit T/R microchannel cards). The
communication group was fiercely fighting off client/server and
distributed computing and 16mbit T/R microchannel cards had design point
of 300+ stations sharing same LAN doing dumb terminal emulation.
Claimed motivation for development of T/R and CAT5 ... was that
the 3270 dumb terminals was 3270 coax was starting to exceed
bldg load weights ... i.e. each 3270 dumb terminal had coax
that ran from machine room for each 3270 (cable trays were
becoming massive) ... communication group was trying
to address the problem with CAT5 T/R lans going to IBM/PCs
emulating 3270 dumb terminals.
Anne & Lynn Wheeler
2021-02-08 00:42:39 UTC
Reply
Permalink
Post by Jim Jackson
Perhaps you thought I was being serious, while I was being sarcastic :-)
Having said that I loved the refs below...
somewhat sensitive: the IBM communication group was constantly
claiming my examples were incorrect, disparaging our customer
executive presentations ... all sorts of political dirty tricks. But
it wasn't just token-ring and ethernet

... we had been working with the NSF director, who was supposed to get
$20M to interconnect the NSF supercomputer centers. Then congress cut
the budget, some other things happened, and eventually an RFP was
released (in part based on what we already had running) ... old
archived (a.f.c) post with the 28Mar1986 preliminary release
http://www.garlic.com/~lynn/2002k.html#12
Internal politics prevented us from bidding. The NSF director tried to
help by writing IBM a letter (copying the CEO) with support from some
other 3-letter agencies ... but that just made the internal politics
worse (further aggravated along the way by comments that what we
already had running was at least 5yrs ahead of all the RFP responses).
As the regional networks connected into the centers, it became the
NSFNET backbone (precursor to the modern internet)
https://www.technologyreview.com/s/401444/grid-computing/

all during this period the IBM communication group was distributing
all sorts of fabricated claims and misinformation about SNA versus
TCP/IP. At one point somebody collected a bunch of their internal
misinformation email (from quite a few higher-up IBM communication
group executives) and forwarded it to us.
--
virtualization experience starting Jan1968, online at home since Mar1970
Jorgen Grahn
2021-02-05 21:41:09 UTC
Reply
Permalink
On Thu, 2021-02-04, Jim Jackson wrote:
...
Post by Jim Jackson
Ethernet v. IBM Token Ring - did you know that ethernet could never work
as it would collapse under moderately heavy loads?
Solved by redefining what "Ethernet" means, right?

/Jorgen
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
John Levine
2021-02-05 22:39:46 UTC
Reply
Permalink
...
Post by Jim Jackson
Ethernet v. IBM Token Ring - did you know that ethernet could never work
as it would collapse under moderately heavy loads?
Solved by redefining what "Ethernet" means, right? The studies claiming
it would collide to death were wrong. Someone else suggested they didn't
take into account listen-before-transmit.

No, the old DIX spec coax Ethernet worked fine under heavy load.

I agree that what we call Ethernet now has little in common with coax
Ethernet other than the framing, but the changes weren't because it
didn't work. It rapidly became clear that a star configuration with hubs
was easier to install and manage than a shared coax bus, and twisted pair
is a lot cheaper and easier to handle than either kind of coax, not to
mention now a lot faster.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Jim Jackson
2021-02-06 13:01:22 UTC
Reply
Permalink
Post by Jorgen Grahn
...
Post by Jim Jackson
Ethernet v. IBM Token Ring - did you know that ethernet could never work
as it would collapse under moderately heavy loads?
Solved by redefining what "Ethernet" means, right?
or you could say that Ethernet evolved more easily - essentially
nearly ditching collision domains with the introduction of multi-port bridges
- now known as switches :-).
Anssi Saari
2021-02-06 09:18:29 UTC
Reply
Permalink
Post by Jim Jackson
Ethernet v. IBM Token Ring - did you know that ethernet could never work
as it would collapse under moderately heavy loads?
I'm reminded of this Dilbert strip when I hear Token Ring:

http://www.englishforum.ch/attachments/forum-support/16664d1277901515-once-again-english-forum-saves-day-dilbert_tokenring.gif
Jim Jackson
2021-02-06 17:09:54 UTC
Reply
Permalink
Post by Anssi Saari
Post by Jim Jackson
Ethernet v. IBM Token Ring - did you know that ethernet could never work
as it would collapse under moderately heavy loads?
http://www.englishforum.ch/attachments/forum-support/16664d1277901515-once-again-english-forum-saves-day-dilbert_tokenring.gif
absolutely hilarious !!!
Andreas Kohlbach
2021-02-04 18:17:37 UTC
Reply
Permalink
Post by Thomas Koenig
The proverbial computer holy war is probably big vs. little endian,
which has been pretty much decided in favor of little endian
by default or by Intel, although TCP/IP is big-endian.
Machine language vs. those CPU-time-wasting assemblers - made
obsolescent by high-level programming languages.
High-level vs. assembler: Hardly anybody does assembler any more.
Structured programming vs. goto - structured programming won.
RISC vs. CISC: The really complex CISC-architectures died out.
The difference is now less important with superscalar architectures.
VMS vs. Unix - decided by DEC's fate, and by Linux.
DECNET vs. TCP/IP: See above.
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Everybody vs. Fortran: Hating FORTRAN become the very definition
of a computer scientist. They didn't notice that, since 1991,
it has become quite a modern programming language.
Besides software there was Commodore vs. Atari computers in the school
yard. And VHS vs. Betamax (and Video 2000 in Europe) VCR formats elsewhere.
--
Andreas

PGP fingerprint 952B0A9F12C2FD6C9F7E68DAA9C2EA89D1A370E0
Joacim Melin
2021-02-07 19:33:13 UTC
Reply
Permalink
Post by Andreas Kohlbach
Post by Thomas Koenig
The proverbial computer holy war is probably big vs. little endian,
which has been pretty much decided in favor of little endian
by default or by Intel, although TCP/IP is big-endian.
Machine language vs. those CPU-time-wasting assemblers - made
obsolescent by high-level programming languages.
High-level vs. assembler: Hardly anybody does assembler any more.
Structured programming vs. goto - structured programming won.
RISC vs. CISC: The really complex CISC-architectures died out.
The difference is now less important with superscalar architectures.
VMS vs. Unix - decided by DEC's fate, and by Linux.
DECNET vs. TCP/IP: See above.
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Everybody vs. Fortran: Hating FORTRAN become the very definition
of a computer scientist. They didn't notice that, since 1991,
it has become quite a modern programming language.
Besides software there was Commodore VS Atari computers in the school
yard. And VHS VS Betamax (and Video 2000 in Europe) VCR formats
elsewhere.

My cousins were a pretty well-to-do family who always had the latest gadgets. However, they sucked at picking the right things to buy, so they
ended up with Video 2000, MSX computers, and the Philips Videopac G7000 (a game console, a re-branded Magnavox Odyssey 2 I believe). I remember
being envious of them in a way, but later, when I got my first own computer (a VIC-20) around 1983 or so, I remember thinking how sad it was
that they spent all that money on stuff that never took off.
Anssi Saari
2021-02-04 19:07:40 UTC
Reply
Permalink
Post by Thomas Koenig
RISC vs. CISC: The really complex CISC-architectures died out.
The difference is now less important with superscalar architectures.
Might this be coming back with Apple's M1 showing how AArch64 can run
circles around x86-64 at lower power? Might get interesting if something
like that starts eating Intel's Xeon business. Or all x86 business.
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
Scott Lurndal
2021-02-04 19:19:07 UTC
Reply
Permalink
Post by Anssi Saari
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
It's still around 50-50 here. The Verilog
types tend to use emacs (and csh), and the software folks vim (and bash/ksh).
Jorgen Grahn
2021-02-05 21:54:56 UTC
Reply
Permalink
Post by Scott Lurndal
Post by Anssi Saari
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
It's still around 50-50 here. The Verilog
types tend to use emacs (and csh), and the software folks vim (and bash/ksh).
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.

/Jorgen
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Anssi Saari
2021-02-06 09:25:45 UTC
Reply
Permalink
Post by Jorgen Grahn
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I guess I haven't either. Or I don't really know what my colleagues use
since I've started in a new job during the pandemic... I remember I had
an interesting talk years ago, probably in the naughties, with a
stranger because I was wearing my Gnus T-Shirt. As I recall he
recognized the elisp code on the shirt as lisp but wasn't an Emacs user.
Anne & Lynn Wheeler
2021-02-08 00:49:58 UTC
Reply
Permalink
Post by Anssi Saari
I guess I haven't either. Or I don't really know what my colleagues use
since I've started in a new job during the pandemic... I remember I had
an interesting talk years ago, probably in the naughties, with a
stranger because I was wearing my Gnus T-Shirt. As I recall he
recognized the elisp code on the shirt as lisp but wasn't an Emacs user.
"t" my header (gnus for a couple decades)
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
--
virtualization experience starting Jan1968, online at home since Mar1970
Thomas Koenig
2021-02-06 11:03:08 UTC
Reply
Permalink
Post by Jorgen Grahn
Post by Scott Lurndal
Post by Anssi Saari
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
It's still around 50-50 here. The Verilog
types tend to use emacs (and csh), and the software folks vim (and bash/ksh).
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use both (vi for typing Usenet articles such as this and simple
texts, emacs for its nice syntax features and indentation).

What emacs is lacking is a block-oriented folding mode, where you can
just fold away the other side of the if statement when you want to
look at the condition.
Dan Espen
2021-02-06 12:37:46 UTC
Reply
Permalink
Post by Thomas Koenig
Post by Jorgen Grahn
Post by Scott Lurndal
Post by Anssi Saari
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
It's still around 50-50 here. The Verilog
types tend to use emacs (and csh), and the software folks vim (and bash/ksh).
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use both (vi for typing Usenet articles such as this and simple
texts, emacs for its nice syntax features and indentation).
What emacs is lacking is a block-oriented folding mode, where you can
just fold in that other side of the if statement when you want to
look at the condition.
Every time I've heard someone claim that Emacs lacks something,
dozens of candidates surface. Emacs has folding-mode,
hide-show, origami, yafolding. I don't use any of them, but I'd be
surprised if there were some requirement you could clearly define
that you couldn't satisfy.
--
Dan Espen
Thomas Koenig
2021-02-06 13:00:00 UTC
Reply
Permalink
Post by Dan Espen
Post by Thomas Koenig
Post by Jorgen Grahn
Post by Scott Lurndal
Post by Anssi Saari
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
It's still around 50-50 here. The Verilog
types tend to use emacs (and csh), and the software folks vim (and bash/ksh).
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use both (vi for typing Usenet articles such as this and simple
texts, emacs for its nice syntax features and indentation).
What emacs is lacking is a block-oriented folding mode, where you can
just fold in that other side of the if statement when you want to
look at the condition.
Every time I've ever heard someone claim that Emacs lacks something
dozens of candidates surface. Emacs has folding-mode,
hide-show, oragami, yafolding. I don't use any of them but I'd be
surprised if there was some requirement you could clearly define that
you couldn't find.
So, here's my requirement, two parts:

Take a C block delineated by curly braces, like

if (foo)
{
bar();
}
else
{
baz();
}

I want to have a reasonably short command that I can apply to the
opening or closing curly brace of each of the blocks, and I want
to view them as one line indicating that something has been hidden.

Second part: Have the same for other programming languages like Fortran with its

DO I=1,10
call bar(i)
END DO

syntax.

And I don't want to add extra markers as described in
https://www.emacswiki.org/emacs/FoldingMode , I want this
integrated with the individual language modes.
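
For what it's worth, the first part of this is close to what the
hideshow package bundled with Emacs (hs-minor-mode) already does
without extra markers, since it keys off the mode's brace syntax. A
minimal, untested sketch; the f90-mode entry is a guess, as hideshow
has no Fortran support out of the box:

```elisp
;; hideshow ships with Emacs; enable it in brace languages:
(add-hook 'c-mode-common-hook #'hs-minor-mode)
;; With point on the opening or closing brace, C-c @ C-c
;; (hs-toggle-hiding) collapses the block to a single line.

;; Block constructs without braces need an entry in
;; hs-special-modes-alist (mode, block-start and block-end regexps,
;; comment start, and optional helper functions); a rough guess
;; for Fortran DO ... END DO in f90-mode:
(add-to-list 'hs-special-modes-alist
             '(f90-mode "\\<do\\>" "\\<end[ \t]*do\\>" "!" nil nil))
```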
Dan Espen
2021-02-06 13:45:29 UTC
Reply
Permalink
Post by Thomas Koenig
Post by Dan Espen
Post by Thomas Koenig
Post by Jorgen Grahn
Post by Scott Lurndal
Post by Anssi Saari
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
It's still around 50-50 here. The Verilog
types tend to use emacs (and csh), and the software folks vim (and bash/ksh).
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use both (vi for typing Usenet articles such as this and simple
texts, emacs for its nice syntax features and indentation).
What emacs is lacking is a block-oriented folding mode, where you can
just fold in that other side of the if statement when you want to
look at the condition.
Every time I've ever heard someone claim that Emacs lacks something
dozens of candidates surface. Emacs has folding-mode,
hide-show, oragami, yafolding. I don't use any of them but I'd be
surprised if there was some requirement you could clearly define that
you couldn't find.
Take a C block delineated by curly braces, like
if (foo)
{
bar();
}
else
{
baz();
}
So you want:

if (foo) ...

Since forward-sentence navigates from the if to the closing }
it would seem like there should be an already existing solution.
Post by Thomas Koenig
I want to have a reasonably short command that I can apply to the
opening or closing curly brace of each of the blocks, and I want
to view them as one line indicating that something has been hidden.
Second part: Have the same for other programming languages like Fortran with its
DO I=1,10
call bar(i)
END DO
syntax.
And I don't want to add extra markers as described in
https://www.emacswiki.org/emacs/FoldingMode , I want this
integrated with the individual language modes.
Since I don't use any of those modes I wouldn't know if there is an
answer. I've just seen many instances where something Emacs supposedly
couldn't do got solved.
--
Dan Espen
Peter Flass
2021-02-06 20:54:09 UTC
Reply
Permalink
Post by Thomas Koenig
Post by Dan Espen
Post by Thomas Koenig
Post by Jorgen Grahn
Post by Scott Lurndal
Post by Anssi Saari
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
It's still around 50-50 here. The Verilog
types tend to use emacs (and csh), and the software folks vim (and bash/ksh).
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use both (vi for typing Usenet articles such as this and simple
texts, emacs for its nice syntax features and indentation).
What emacs is lacking is a block-oriented folding mode, where you can
just fold in that other side of the if statement when you want to
look at the condition.
Every time I've ever heard someone claim that Emacs lacks something
dozens of candidates surface. Emacs has folding-mode,
hide-show, oragami, yafolding. I don't use any of them but I'd be
surprised if there was some requirement you could clearly define that
you couldn't find.
Take a C block delineated by curly braces, like
if (foo)
{
bar();
}
else
{
baz();
}
I want to have a reasonably short command that I can apply to the
opening or closing curly brace of each of the blocks, and I want
to view them as one line indicating that something has been hidden.
Second part: Have the same for other programming languages like Fortran with its
DO I=1,10
call bar(i)
END DO
syntax.
And I don't want to add extra markers as described in
https://www.emacswiki.org/emacs/FoldingMode , I want this
integrated with the individual language modes.
Easy, use ISPF. I don’t recall if THE (xedit clone) has this.
--
Pete
Dan Espen
2021-02-06 22:33:43 UTC
Reply
Permalink
Post by Peter Flass
Post by Thomas Koenig
Post by Dan Espen
Post by Thomas Koenig
Post by Jorgen Grahn
Post by Scott Lurndal
Post by Anssi Saari
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
It's still around 50-50 here. The Verilog
types tend to use emacs (and csh), and the software folks vim (and bash/ksh).
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use both (vi for typing Usenet articles such as this and simple
texts, emacs for its nice syntax features and indentation).
What emacs is lacking is a block-oriented folding mode, where you can
just fold in that other side of the if statement when you want to
look at the condition.
Every time I've ever heard someone claim that Emacs lacks something
dozens of candidates surface. Emacs has folding-mode,
hide-show, oragami, yafolding. I don't use any of them but I'd be
surprised if there was some requirement you could clearly define that
you couldn't find.
Take a C block delineated by curly braces, like
if (foo)
{
bar();
}
else
{
baz();
}
I want to have a reasonably short command that I can apply to the
opening or closing curly brace of each of the blocks, and I want
to view them as one line indicating that something has been hidden.
Second part: Have the same for other programming languages like Fortran with its
DO I=1,10
call bar(i)
END DO
syntax.
And I don't want to add extra markers as described in
https://www.emacswiki.org/emacs/FoldingMode , I want this
integrated with the individual language modes.
Easy, use ISPF. I don’t recall if THE (xedit clone) has this.
Of course you'd need an edit macro to do it the way Emacs or vi might,
and you still couldn't make ISPF syntax-aware; maybe use indentation
in an edit macro to get close.

But hiding is definitely an ISPF strong point.

I had an edit macro called "G" that was:

ex all
find all <args>

G for grep.
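On the Unix side the same effect is a grep one-liner (the sample data here is made up for illustration):

```shell
# Show only the matching lines, with their line numbers --
# roughly what "exclude all, then find all" does in the ISPF editor
printf 'DISPLAY A\nMOVE B\nDISPLAY C\n' > prog.txt
grep -n 'DISPLAY' prog.txt
```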
--
Dan Espen
Thomas Koenig
2021-02-07 09:26:16 UTC
Reply
Permalink
Post by Peter Flass
Post by Thomas Koenig
Take a C block delineated by curly braces, like
if (foo)
{
bar();
}
else
{
baz();
}
I want to have a reasonably short command that I can apply to the
opening or closing curly brace of each of the blocks, and I want
to view them as one line indicating that something has been hidden.
Second part: Have the same for other programming languages like Fortran with its
DO I=1,10
call bar(i)
END DO
syntax.
And I don't want to add extra markers as described in
https://www.emacswiki.org/emacs/FoldingMode , I want this
integrated with the individual language modes.
Easy, use ISPF. I don’t recall if THE (xedit clone) has this.
Does ISPF support any Fortran version that has not been outdated
for 30 years?
Dan Espen
2021-02-07 11:17:01 UTC
Reply
Permalink
Post by Thomas Koenig
Post by Peter Flass
Post by Thomas Koenig
Take a C block delineated by curly braces, like
if (foo)
{
bar();
}
else
{
baz();
}
I want to have a reasonably short command that I can apply to the
opening or closing curly brace of each of the blocks, and I want
to view them as one line indicating that something has been hidden.
Second part: Have the same for other programming languages like Fortran with its
DO I=1,10
call bar(i)
END DO
syntax.
And I don't want to add extra markers as described in
https://www.emacswiki.org/emacs/FoldingMode , I want this
integrated with the individual language modes.
Easy, use ISPF. I don’t recall if THE (xedit clone) has this.
Does ISPF support any Fortran version that has not been outdated
for 30 years?
Not sure what you think ISPF is.
Here we are talking about the ISPF editor.
The editor doesn't care too much about what language you are editing.
Its language support does include highlighting keywords but it does
that without really understanding the actual language syntax.

ISPF does have panels to invoke foreground and background compiles.
Those panels are so brain dead that I've never seen any shop make
use of them.
--
Dan Espen
Thomas Koenig
2021-02-07 11:22:53 UTC
Reply
Permalink
Post by Dan Espen
Post by Thomas Koenig
Post by Peter Flass
Post by Thomas Koenig
Take a C block delineated by curly braces, like
if (foo)
{
bar();
}
else
{
baz();
}
I want to have a reasonably short command that I can apply to the
opening or closing curly brace of each of the blocks, and I want
to view them as one line indicating that something has been hidden.
Second part: Have the same for other programming languages like Fortran with its
DO I=1,10
call bar(i)
END DO
syntax.
And I don't want to add extra markers as described in
https://www.emacswiki.org/emacs/FoldingMode , I want this
integrated with the individual language modes.
Easy, use ISPF. I don’t recall if THE (xedit clone) has this.
Does ISPF support any Fortran version that has not been outdated
for 30 years?
Not sure what you think ISPF is.
I've used it.
Post by Dan Espen
Here we are talking about the ISPF editor.
Sure. I don't have access to IBM mainframes, so I presumably cannot
use it. I could use lspf, probably, which seems to be a clone.
However...
Post by Dan Espen
The editor doesn't care too much about what language you are editing.
Its language support does include highlighting keywords but it does
that without really understanding the actual language syntax.
Given that MVS is still stuck with Fortran 77 + extensions, the
chances of ISPF having the correct syntax highlighting for anything
newer than Fortran 77 seem remote.
Dan Espen
2021-02-07 13:54:20 UTC
Reply
Permalink
Post by Thomas Koenig
Post by Dan Espen
Post by Peter Flass
Post by Thomas Koenig
Take a C block delineated by curly braces, like
if (foo)
{
bar(); }
else
{
baz(); }
I want to have a reasonably short command that I can apply to the
opening or closing curly brace of each of the blocks, and I want
to view them as one line indicating that something has been hidden.
Second part: Have the same for other programming languages like Fortran with its
DO I=1,10
call bar(i) END DO
syntax.
And I don't want to add extra markers as described in
https://www.emacswiki.org/emacs/FoldingMode , I want this
integrated with the individual language modes.
Easy, use ISPF. I don’t recall if THE (xedit clone) has this.
Does ISPF support any Fortran version that has not been outdated for
30 years?
Not sure what you think ISPF is.
I've used it.
Wasn't sure.
Post by Thomas Koenig
Post by Dan Espen
Here we are talking about the ISPF editor.
Sure. I don't have access to IBM mainframes, so I presumably cannot
use it. I could use lspf, probably, which seems to be a clone.
However...
A while back I saw someone announce open access to an IBM mainframe. I
logged on, was able to use edit and use other parts of ISPF. I don't
know if it's still around. Must have been 10 years ago.
Post by Thomas Koenig
Post by Dan Espen
The editor doesn't care too much about what language you are
editing. Its language support does include highlighting keywords
but it does that without really understanding the actual language
syntax.
Given that MVS is still stuck with Fortran 77 + extensions, the
chances of ISPF having the correct syntax highlighting for anything
newer than Fortran 77 seem remote.
I don't know Fortran but wouldn't most of the keywords still be the
same?

I think I read recently that highlighting in ISPF is now user
customizable.
--
Dan Espen
Thomas Koenig
2021-02-07 15:21:38 UTC
Reply
Permalink
Post by Dan Espen
Post by Thomas Koenig
Given that MVS is still stuck with Fortran 77 + extensions, the
chances of ISPF having the correct syntax highlighting for anything
newer than Fortran 77 seem remote.
I don't know Fortran but wouldn't most of the keywords still be the
same?
Fortran has no reserved keywords as such.

And also, a lot of the syntax is new since Fortran 90.
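A classic illustration of the no-reserved-words point (a sketch in free-form Fortran; any recent compiler should accept it):

```fortran
! Legal Fortran: IF and DO are ordinary identifiers, not reserved words
program nokeys
  implicit none
  integer :: if, do
  if = 1
  do = 2
  print *, if + do   ! prints 3
end program nokeys
```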
Post by Dan Espen
I think I read recently that highlighting in ISPF is now user
customizable.
OK.
Stoat
2021-02-07 21:24:05 UTC
Reply
Permalink
Post by Thomas Koenig
Post by Dan Espen
Post by Thomas Koenig
Given that MVS is still stuck with Fortran 77 + extensions, the
chances of ISPF having the correct syntax highlighting for anything
newer than Fortran 77 seem remote.
I don't know Fortran but wouldn't most of the keywords still be the
same?
Fortran has no reserved keywords as such.
And also, a lot of the syntax is new since Fortran 90.
This reminds me of Tony Hoare's 1982 comment:
“I don't know what the language of the year 2000 will look like, but I
know it will be called Fortran.”
Post by Thomas Koenig
Post by Dan Espen
I think I read recently that highlighting in ISPF is now user
customizable.
OK.
--brian
--
Wellington
New Zealand
Charlie Gibbs
2021-02-08 05:41:25 UTC
Reply
Permalink
Post by Stoat
Post by Thomas Koenig
Post by Dan Espen
Post by Thomas Koenig
Given that MVS is still stuck with Fortran 77 + extensions, the
chances of ISPF having the correct syntax highlighting for anything
newer than Fortran 77 seem remote.
I don't know Fortran but wouldn't most of the keywords still be the
same?
Fortran has no reserved keywords as such.
And also, a lot of the syntax is new since Fortran 90.
“I don't know what the language of the year 2000 will look like, but I
know it will be called Fortran.”
"A Real Programmer can write FORTRAN in any language."
--
/~\ Charlie Gibbs | "Some of you may die,
\ / <***@kltpzyxm.invalid> | but it's a sacrifice
X I'm really at ac.dekanfrus | I'm willing to make."
/ \ if you read it the right way. | -- Lord Farquaad (Shrek)
J. Clarke
2021-02-08 10:55:45 UTC
Reply
Permalink
Post by Charlie Gibbs
Post by Stoat
Post by Thomas Koenig
Post by Dan Espen
Post by Thomas Koenig
Given that MVS is still stuck with Fortran 77 + extensions, the
chances of ISPF having the correct syntax highlighting for anything
newer than Fortran 77 seem remote.
I don't know Fortran but wouldn't most of the keywords still be the
same?
Fortran has no reserved keywords as such.
And also, a lot of the syntax is new since Fortran 90.
“I don't know what the language of the year 2000 will look like, but I
know it will be called Fortran.”
"A Real Programmer can write FORTRAN in any language."
Unfortunately this is true. I deal with, among other things, a
significant body of C transliterated from FORTRAN, arithmetic ifs and
computed gotoes and the whole nine yards. And nobody has ever
explained to me why it needed to be transliterated to C.
Ahem A Rivet's Shot
2021-02-08 11:26:23 UTC
Reply
Permalink
On Mon, 08 Feb 2021 05:55:45 -0500
Post by J. Clarke
Unfortunately this is true. I deal with, among other things, a
significant body of C transliterated from FORTRAN,
With f2c ?
Post by J. Clarke
arithmetic ifs and
computed gotoes and the whole nine yards. And nobody has ever
explained to me why it needed to be transliterated to C.
Probably someone noticed that C programmers were easier to find
than FORTRAN programmers.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Thomas Koenig
2021-02-08 12:26:33 UTC
Reply
Permalink
Post by Ahem A Rivet's Shot
On Mon, 08 Feb 2021 05:55:45 -0500
Post by J. Clarke
Unfortunately this is true. I deal with, among other things, a
significant body of C transliterated from FORTRAN,
With f2c ?
Post by J. Clarke
arithmetic ifs and
computed gotoes and the whole nine yards. And nobody has ever
explained to me why it needed to be transliterated to C.
Probably someone noticed that C programmers were easier to find
than FORTRAN programmers.
FORTRAN (pre-F90, even pre-F77) is a pretty small language. Anybody
who knows C should be able to read and modify it pretty easily.

The problem is more the lack of structure imposed by the lack of
many control structures pre-F77, and the lack of variable
declarations. Translating that code to C will only fix the
second shortcoming.

There was (is) a tool that is actually quite good at restructuring
pre-F77 Fortran code, the so-called "toolpack". It is rather
picky about extensions, though.

Robin Vowels
2021-02-08 04:40:56 UTC
Reply
Permalink
Post by Thomas Koenig
Post by Dan Espen
Post by Thomas Koenig
Given that MVS is still stuck with Fortran 77 + extensions, the
chances of ISPF having the correct syntax highlighting for anything
newer than Fortran 77 seem remote.
I don't know Fortran but wouldn't most of the keywords still be the
same?
.
Post by Thomas Koenig
Fortran has no reserved keywords as such.
.
That's not the answer to any question that the poster put.
.
Post by Thomas Koenig
And also, a lot of the syntax is new since Fortran 90.
.
indeed.
.
Post by Thomas Koenig
Post by Dan Espen
I think I read recently that highlighting in ISPF is now user
customizable.
OK.
J. Clarke
2021-02-07 15:30:24 UTC
Reply
Permalink
Post by Dan Espen
Post by Thomas Koenig
Post by Dan Espen
Post by Peter Flass
Post by Thomas Koenig
Take a C block delineated by curly braces, like
if (foo)
{
bar(); }
else
{
baz(); }
I want to have a reasonably short command that I can apply to the
opening or closing curly brace of each of the blocks, and I want
to view them as one line indicating that something has been hidden.
Second part: Have the same for other programming languages like Fortran with its
DO I=1,10
call bar(i) END DO
syntax.
And I don't want to add extra markers as described in
https://www.emacswiki.org/emacs/FoldingMode , I want this
integrated with the individual language modes.
Easy, use ISPF. I don’t recall if THE (xedit clone) has this.
Does ISPF support any Fortran version that has not been outdated for
30 years?
Not sure what you think ISPF is.
I've used it.
Wasn't sure.
Post by Thomas Koenig
Post by Dan Espen
Here we are talking about the ISPF editor.
Sure. I don't have access to IBM mainframes, so I presumably cannot
use it. I could use lspf, probably, which seems to be a clone.
However...
A while back I saw someone announce open access to an IBM mainframe. I
logged on, was able to use edit and use other parts of ISPF. I don't
know if it's still around. Must have been 10 years ago.
<https://mainframenation.com/mainframe/how-to-get-a-mainframe-access/>
gives several legal options.

If you're doing this for personal use and don't mind using warez it's
possible to obtain a bootleg ADCD 1.13 that will boot on
Hercules--this is a full Z/OS developer kit. The bootleg has some
sort of configuration issue such that it does not go down clean and
won't re-ipl after a shutdown. This makes it more of a toy than a
tool, but it does have ISPF.

If you're well-heeled you can get a legal Z/OS for $900/yr.
Post by Dan Espen
Post by Thomas Koenig
Post by Dan Espen
The editor doesn't care too much about what language you are
editing. Its language support does include highlighting keywords
but it does that without really understanding the actual language
syntax.
Given that MVS is still stuck with Fortran 77 + extensions, the
chances of ISPF having the correct syntax highlighting for anything
newer than Fortran 77 seem remote.
I don't know Fortran but wouldn't most of the keywords still be the
same?
I think I read recently that highlighting in ISPF is now user
customizable.
Ahem A Rivet's Shot
2021-02-07 16:56:49 UTC
Reply
Permalink
On Sun, 07 Feb 2021 10:30:24 -0500
Post by J. Clarke
If you're doing this for personal use and don't mind using warez it's
possible to obtain a bootleg ADCD 1.13 that will boot on
Hercules--this is a full Z/OS developer kit. The bootleg has some
sort of configuration issue such that it does not go down clean and
won't re-ipl after a shutdown.
So put it in a Docker image and use external storage for data,
never shut down just reload the image to reboot. Run a big swarm (say
emulate Lexis-Nexis) and watch your AWS bill explode!
Post by J. Clarke
This makes it more of a toy than a tool, but it does have ISPF.
It just needs a workaround :)
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Peter Flass
2021-02-08 00:47:10 UTC
Reply
Permalink
Post by Dan Espen
Post by Thomas Koenig
Post by Peter Flass
Post by Thomas Koenig
Take a C block delineated by curly braces, like
if (foo)
{
bar();
}
else
{
baz();
}
I want to have a reasonably short command that I can apply to the
opening or closing curly brace of each of the blocks, and I want
to view them as one line indicating that something has been hidden.
Second part: Have the same for other programming languages like Fortran with its
DO I=1,10
call bar(i)
END DO
syntax.
And I don't want to add extra markers as described in
https://www.emacswiki.org/emacs/FoldingMode , I want this
integrated with the individual language modes.
Easy, use ISPF. I don’t recall if THE (xedit clone) has this.
Does ISPF support any Fortran version that has not been outdated
for 30 years?
Not sure what you think ISPF is.
Here we are talking about the ISPF editor.
The editor doesn't care too much about what language you are editing.
Its language support does include highlighting keywords but it does
that without really understanding the actual language syntax.
ISPF does have panels to invoke foreground and background compiles.
Those panels are so brain dead that I've never seen any shop make
use of them.
I was going to say we used them extensively, but now that I think back we
didn’t use them at all. It was simpler to have the program wrapped in a
line or so of JCL and then just SUB it. I don’t think anyone ever used
foreground compilation, our batch was so fast.
--
Pete
Dan Espen
2021-02-08 01:33:48 UTC
Reply
Permalink
Post by Peter Flass
Post by Peter Flass
Post by Thomas Koenig
Take a C block delineated by curly braces, like
if (foo) { bar(); } else { baz(); }
I want to have a reasonably short command that I can apply to the
opening or closing curly brace of each of the blocks, and I want
to view them as one line indicating that something has been hidden.
Second part: Have the same for other programming languages like Fortran with its
DO I=1,10 call bar(i) END DO
syntax.
And I don't want to add extra markers as described in
https://www.emacswiki.org/emacs/FoldingMode , I want this
integrated with the individual language modes.
Easy, use ISPF. I don’t recall if THE (xedit clone) has this.
Does ISPF support any Fortran version that has not been outdated for
30 years?
Not sure what you think ISPF is. Here we are talking about the
ISPF editor. The editor doesn't care too much about what language
you are editing. Its language support does include highlighting
keywords but it does that without really understanding the actual
language syntax.
ISPF does have panels to invoke foreground and background compiles.
Those panels are so brain dead that I've never seen any shop make use
of them.
I was going to say we used them extensively, but now that I think back
we didn’t use them at all. It was simpler to have the program wrapped
in a line or so of JCL and then just SUB it. I don’t think anyone ever
used foreground compilation, our batch was so fast.
There was never any way to get the space on the panel to have all the
header libs and link libs needed for an application compile.

Just about everywhere else I worked programmers used JCL as you
described. To me the biggest problem with background compiles is that
you never knew when they were done. We had a couple hundred programmers
hitting enter all day long waiting for their compile to finish.

Once our development support group changed their stuff to run in the
foreground. The computer center took one look at it and felt compiles
were running too fast. Somehow they concluded that was bad and disabled
it. This was in a shop where the only stuff running was development.

Somehow they never caught on to my stuff.

I set up compile panels that would run foreground or background. They
would handle the same stuff our development support group had or any
ad-hoc compile. Instead of having space for a fixed number of header
libs, link libs, I made the compile panels use TBDISPL (there were
tables on the panel). You could put as many libs on the panel as you
wanted.

With the compile panels I set up, you'd hit enter then the panel would
lock with short messages for compile step, link step showing the
condition code for each step. The compile output did not go to the
spool it went into a PDSE or flat file. If you had an error and wanted
to look at the output you put an "L" (listing) on the command line and
hit enter.

The IBM stuff was just uninspired crap. They could have at least had
one compile panel doing foreground and background. With stuff I wrote,
for foreground you hit enter. For background you put an "S" (sub) on
the command line and hit enter.

The listing file had all the libs used listed at the front.
You might be working on 2 different problems using different
libraries. Instead of re-typing all the libs onto the panel,
you put "X" (extract) on the command line and the panel read
the libs out of the listing and put them on the panel.

I had a lot of fun with that stuff but eventually abandoned ISPF
because the whole process worked even better when driven from UNIX
with Makefiles.
--
Dan Espen
a***@math.uni.wroc.pl
2021-02-06 23:42:36 UTC
Reply
Permalink
Post by Dan Espen
Post by Thomas Koenig
Post by Jorgen Grahn
Post by Scott Lurndal
Post by Anssi Saari
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
It's still around 50-50 here. The Verilog
types tend to use emacs (and csh), and the software folks vim (and bash/ksh).
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use both (vi for typing Usenet articles such as this and simple
texts, emacs for its nice syntax features and indentation).
What emacs is lacking is a block-oriented folding mode, where you can
just fold in that other side of the if statement when you want to
look at the condition.
Every time I've ever heard someone claim that Emacs lacks something
dozens of candidates surface. Emacs has folding-mode,
hide-show, origami, yafolding. I don't use any of them but I'd be
surprised if there was some requirement you could clearly define that
you couldn't find.
Some years ago (IIRC it was Emacs 23 era) I wanted to have a small
cursor in text mode instead of the default block one. The answer
from developers was: this is hardcoded with no way to change.
They added a configuration parameter to the next version (IIRC 24).

I was surprised, but looking back: it makes a lot of sense
that _some_ simple features are missing. And it makes sense
that folks like myself find them. Namely, I make little use
of emacs and probably have different ways of doing things
than typical emacs users. So, there is more chance that
I will hit some feature that I need, but nobody earlier bothered
to tell emacs developers that this feature is needed.
--
Waldek Hebisch
Dan Espen
2021-02-07 02:09:52 UTC
Reply
Permalink
Post by a***@math.uni.wroc.pl
Post by Dan Espen
Post by Thomas Koenig
Post by Jorgen Grahn
Post by Scott Lurndal
Post by Anssi Saari
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
It's still around 50-50 here. The Verilog
types tend to use emacs (and csh), and the software folks vim (and bash/ksh).
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use both (vi for typing Usenet articles such as this and simple
texts, emacs for its nice syntax features and indentation).
What emacs is lacking is a block-oriented folding mode, where you can
just fold in that other side of the if statement when you want to
look at the condition.
Every time I've ever heard someone claim that Emacs lacks something
dozens of candidates surface. Emacs has folding-mode,
hide-show, origami, yafolding. I don't use any of them but I'd be
surprised if there was some requirement you could clearly define that
you couldn't find.
Some years ago (IIRC it was Emacs 23 era) I wanted to have a small
cursor in text mode instead of the default block one. The answer
from developers was: this is hardcoded with no way to change.
They added a configuration parameter to the next version (IIRC 24).
I was surprised, but looking back: it makes a lot of sense
that _some_ simple features are missing. And it makes sense
that folks like myself find them. Namely, I make little use
of emacs and probably have different ways of doing things
than typical emacs users. So, there is more chance that
I will hit some feature that I need, but nobody earlier bothered
to tell emacs developers that this feature is needed.
I remember that. If I remember right, XEmacs had a few options
with the cursor and when I gave up XEmacs, Emacs didn't have equivalent features.
Now Emacs is loaded with cursor options. I'm using a hbar cursor with
controlled blink. The cursor starts blinking when you pause
then it stops blinking after a little longer.

Other options, box, hollow box, horizontal and vertical bar.
--
Dan Espen
Scott Lurndal
2021-02-06 15:48:57 UTC
Reply
Permalink
Post by Thomas Koenig
Post by Jorgen Grahn
Post by Scott Lurndal
Post by Anssi Saari
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
It's still around 50-50 here. The Verilog
types tend to use emacs (and csh), and the software folks vim (and bash/ksh).
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use both (vi for typing Usenet articles such as this and simple
texts, emacs for its nice syntax features and indentation).
vim has nice syntax features and indentation. And multiple windows,
multiple tabs, folding and a bunch of other features. It's way beyond
vi.
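For example, the folding mentioned above needs only a couple of .vimrc lines (a sketch; exact results depend on the language's syntax file):

```vim
" Fold according to the syntax file; start with all folds open
set foldmethod=syntax
set foldlevelstart=99
" zc closes the fold under the cursor, zo opens it again
```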
Thomas Koenig
2021-02-06 16:41:47 UTC
Reply
Permalink
Post by Scott Lurndal
Post by Thomas Koenig
Post by Jorgen Grahn
Post by Scott Lurndal
Post by Anssi Saari
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
It's still around 50-50 here. The Verilog
types tend to use emacs (and csh), and the software folks vim (and bash/ksh).
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use both (vi for typing Usenet articles such as this and simple
texts, emacs for its nice syntax features and indentation).
vim has nice syntax features and indentation.
I know it can do so.

It never worked for me out of the box, so I didn't bother finding out
how to properly set it up. Emacs just works, hit tab and you're set.

(Plus, I do a lot of programming in C for GNU, and emacs is set up for
the GNU style by default :-)
Andreas Kohlbach
2021-02-06 18:36:45 UTC
Reply
Permalink
Post by Thomas Koenig
Post by Jorgen Grahn
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use both (vi for typing Usenet articles such as this and simple
texts, emacs for its nice syntax features and indentation).
What emacs is lacking is a block-oriented folding mode, where you can
just fold in that other side of the if statement when you want to
look at the condition.
Not sure if it's in my Emacs config or native, but M-q folds the
paragraph the cursor is in here.
--
Andreas
Dan Espen
2021-02-06 18:46:26 UTC
Reply
Permalink
Post by Andreas Kohlbach
Post by Thomas Koenig
Post by Jorgen Grahn
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use both (vi for typing Usenet articles such as this and simple
texts, emacs for its nice syntax features and indentation).
What emacs is lacking is a block-oriented folding mode, where you can
just fold in that other side of the if statement when you want to
look at the condition.
Not sure if it's in my Emacs config or native, but M-q folds the
paragraph the cursor is in here.
Must be your config. For me it's bound to fill-paragraph.
--
Dan Espen
Andreas Kohlbach
2021-02-06 23:50:14 UTC
Reply
Permalink
Post by Dan Espen
Post by Andreas Kohlbach
Post by Thomas Koenig
Post by Jorgen Grahn
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use both (vi for typing Usenet articles such as this and simple
texts, emacs for its nice syntax features and indentation).
What emacs is lacking is a block-oriented folding mode, where you can
just fold in that other side of the if statement when you want to
look at the condition.
Not sure if it's in my Emacs config or native, but M-q folds the
paragraph the cursor is in here.
Must be your config. For me it's bound to fill-paragraph.
Isn't that the same?

It wraps the paragraph to fit X characters here.
--
Andreas

PGP fingerprint 952B0A9F12C2FD6C9F7E68DAA9C2EA89D1A370E0
Dan Espen
2021-02-07 02:10:48 UTC
Reply
Permalink
Post by Andreas Kohlbach
Post by Dan Espen
Post by Andreas Kohlbach
Post by Thomas Koenig
Post by Jorgen Grahn
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use both (vi for typing Usenet articles such as this and simple
texts, emacs for its nice syntax features and indentation).
What emacs is lacking is a block-oriented folding mode, where you can
just fold in that other side of the if statement when you want to
look at the condition.
Not sure if it's in my Emacs config or native, but M-q folds the
paragraph the cursor is in here.
Must be your config. For me it's bound to fill-paragraph.
Isn't that the same?
It wraps the paragraph to fit X characters here.
I think the subject is hiding text, not flowing it.
--
Dan Espen
Thomas Koenig
2021-02-06 18:46:31 UTC
Reply
Permalink
Post by Andreas Kohlbach
Not sure if it's in my Emacs config or native, but M-q folds the
paragraph the cursor is in here.
Ah, ambiguous definitions...

What I mean is that, instead of

if (foo)
{
bar();
baz();
}
else
{
foobar();
}

I want to see, when I fold the first bracket

if (foo)
> {
else
{
foobar();
}

where the > { is some graphical representation that something
has been folded.

KDE's "kate" editor can do so, for example.
Dan Espen
2021-02-06 19:00:02 UTC
Reply
Permalink
Post by Thomas Koenig
Post by Andreas Kohlbach
Not sure if it's in my Emacs config or native, but M-q folds the
paragraph the cursor is in here.
Ah, ambiguous definitions...
What I mean is that, instead of
if (foo)
{
bar();
baz();
}
else
{
foobar();
}
I want to see, when I fold the first bracket
if (foo)
> {
else
{
foobar();
}
where the > { is some graphical representation that something
has been folded.
KDE's "kate" editor can do so, for example.
That would be forward-sexp.
I'm guessing it's already there but it's a simple matter of

(save-excursion
  (let ((beg (point)))
    (forward-sexp)
    ;; hide the region, e.g. with an invisible overlay
    (overlay-put (make-overlay beg (point)) 'invisible t)))

I'd expect the hidden mark to be where the hidden text starts and the
last time I worked with hidden text it was marked with ellipses.
So,

if (foo)
{…
else
{
foobar();
}

Well, not sure I actually typed an ellipsis, too small for me to see.
Customizable anyway.
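For out-of-the-box use, hideshow (which ships with Emacs) already covers the brace case; a minimal setup might be (a sketch, the key binding is my own choice):

```elisp
;; hideshow ships with Emacs; enable it for C buffers
(add-hook 'c-mode-hook #'hs-minor-mode)
;; optional short binding: toggle hiding of the block at point
(with-eval-after-load 'hideshow
  (define-key hs-minor-mode-map (kbd "C-c f") #'hs-toggle-hiding))
```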
--
Dan Espen
Jim Jackson
2021-02-06 17:05:48 UTC
Reply
Permalink
Post by Jorgen Grahn
Post by Scott Lurndal
Post by Anssi Saari
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
It's still around 50-50 here. The Verilog
types tend to use emacs (and csh), and the software folks vim (and bash/ksh).
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use emacs for editing anything big - programming, documents (html, md,
etc) - and vi/vim for small config files and scripts. It's funny how the
fingers seem to switch.
Ted Nolan <tednolan>
2021-02-06 22:33:20 UTC
Reply
Permalink
Post by Jim Jackson
Post by Jorgen Grahn
Post by Scott Lurndal
Post by Anssi Saari
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
It's still around 50-50 here. The Verilog
types tend to use emacs (and csh), and the software folks vim (and bash/ksh).
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
I use emacs for editing anything big - programming, documents (html, md,
etc) - and vi/vim for small config files and scripts. It's funny how the
fingers seem to switch.
I use "real" vi on FreeBSD. On linux, everytime I get on a new system
I have to spend a good while on the .vimrc to turn off all the stuff
that breaks my mental model of what vi should do.
--
columbiaclosings.com
What's not in Columbia anymore..
Andreas Kohlbach
2021-02-06 18:33:56 UTC
Post by Jorgen Grahn
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
Am using Gnus Emacs to post this article here. :-)
--
Andreas
Dan Espen
2021-02-06 18:43:56 UTC
Post by Andreas Kohlbach
Post by Jorgen Grahn
Around here, very few use vim, and I've not met a fellow Emacs user
for many years.
Am using Gnus Emacs to post this article here. :-)
Lots of us are. Well at least, me too.

Hmm, this sneaked past me. Emacs has had a vi mode for ages.
But I wasn't aware that vim has reached that level too:

https://www.vim.org/scripts/script.php?script_id=300

So, I guess now it doesn't matter which one you use, each emulates the
other if you want.

Back when I started using Emacs, I chose Emacs over vi for one good
reason, Emacs was 100% programmable and vi only allowed the user to set
a few options. Being a programmer, I figured I wanted my editor to be
programmable. Now both editors have reached that level so it doesn't
matter much which you use.
--
Dan Espen
gareth evans
2021-02-04 21:51:32 UTC
Post by Anssi Saari
Post by Thomas Koenig
RISC vs. CISC: The really complex CISC-architectures died out.
The difference is now less important with superscalar architectures.
Might this be coming back with Apple's M1 showing how AArch64 can run
circles around x86-64 at lower power?
I can't get my head round assembler on the ARM architecture when there
are no operations on memory locations, with all data firstly having
to be moved into registers.

Too much exposure to PDP11 and X86, I suppose.
Andreas Kohlbach
2021-02-05 01:07:08 UTC
Post by gareth evans
Post by Anssi Saari
Post by Thomas Koenig
RISC vs. CISC: The really complex CISC-architectures died out.
The difference is now less important with superscalar architectures.
Might this be coming back with Apple's M1 showing how AArch64 can run
circles around x86-64 at lower power?
I can't get my head round assembler on the ARM architecture when there
are no operations on memory locations, with all data firstly having
to be moved into registers.
Recently I also had a look into ARM Assembly. But...
Post by gareth evans
Too much exposure to PDP11 and X86, I suppose.
... yes, that might be a problem. I had a look into 6502, Z80 and M68000
before. That kind of spoils learning something different for me.
--
Andreas
gareth evans
2021-02-05 12:18:39 UTC
Post by Andreas Kohlbach
Post by gareth evans
Post by Anssi Saari
Post by Thomas Koenig
RISC vs. CISC: The really complex CISC-architectures died out.
The difference is now less important with superscalar architectures.
Might this be coming back with Apple's M1 showing how AArch64 can run
circles around x86-64 at lower power?
I can't get my head round assembler on the ARM architecture when there
are no operations on memory locations, with all data firstly having
to be moved into registers.
Recently I also had a look into ARM Assembly. But...
Post by gareth evans
Too much exposure to PDP11 and X86, I suppose.
... yes, that might be a problem. I had a look into 6502, Z80 and M68000
before. That kind id spoils the learning of something different for me.
Many moons ago, I did a 3D Space Invaders game for the Oric. This was a
6502 machine code exercise saving back to a cassette recorder.

My approach to dealing with the inevitable bugs in the software was to
occasionally sprinkle a succession of three NOPs into the machine code
so that bugs might be resolved with a 3-byte jump to replace those NOPs.

1983; Ye Gods! 38 years ago!

What progress there has been in those 38 years! Going back 38 years
prior to then takes us to 1945, with its Williams tubes and mercury
delay lines, not to mention hundreds of valves (tubes to the Yanks)
that were more reliable if you never switched them off!
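The three-NOP trick is easy to sketch (a hypothetical illustration, not
the original Oric code: the JMP and NOP opcodes are real 6502 values,
but the routine and addresses are invented):

```python
# 6502: NOP is $EA, JMP absolute is $4C lo hi, so three consecutive
# NOPs leave exactly enough room to splice in one absolute jump.
NOP, JMP = 0xEA, 0x4C

def patch_nops(image, offset, target):
    """Overwrite a 3-NOP pad at `offset` with JMP `target`."""
    assert image[offset:offset + 3] == bytes([NOP] * 3), "no NOP pad here"
    image[offset] = JMP
    image[offset + 1] = target & 0xFF          # low byte first: little-endian
    image[offset + 2] = (target >> 8) & 0xFF

code = bytearray([0xA9, 0x00, NOP, NOP, NOP, 0x60])  # LDA #0 / pad / RTS
patch_nops(code, 2, 0xC000)   # reroute through a fix-up routine at $C000
```

The fix-up routine at the target address would end with its own JMP back
past the pad, exactly as with any hand-applied binary patch.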
Charlie Gibbs
2021-02-05 19:03:21 UTC
Post by gareth evans
Many moons ago, I did a 3D Space Invaders game for the Oric. This was a
6502 machine code exercise saving back to a cassette recorder.
My approach to dealing with the inevitable bugs in the software was to
occasionally sprinkle a succession of three NOPs into the machine code
so that bugs might be resolved with a 3-byte jump to replace those NOPs.
Still, if you had to jump out somewhere other than where those three NOPs
were, you'd still have to do some fancy footwork. I didn't bother; I'd
just stuff in the jump, put a copy of the overlaid instruction in the
patch area, then continue with the patch code.
Post by gareth evans
1983; Ye Gods! 38 years ago!
Even longer for me. Where does the time go?
Post by gareth evans
What progress there has been in those 38 years because going back 38
years prior to then takes us to 1945 with its Williams Tubes and its
mercury delay lines, not to mention 100s of valves (tubes to the Yanks)
that had increased reliability if you never switched them off!
Nowadays it's hard drives that are more reliable if you never switch
them off. I run my boxes 24/7 and have hardly ever had to replace
a drive before the whole machine is superseded after 10 years or so.
--
/~\ Charlie Gibbs | "Some of you may die,
\ / <***@kltpzyxm.invalid> | but it's a sacrifice
X I'm really at ac.dekanfrus | I'm willing to make."
/ \ if you read it the right way. | -- Lord Farquaad (Shrek)
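The no-pad variant described here can be sketched the same way
(hypothetical code, 6502-flavored: `install_patch` and all the
addresses are invented): overwrite any whole instruction with a jump,
keep a copy of the displaced bytes at the head of the patch area, and
end the patch with a jump back.

```python
JMP = 0x4C  # 6502 absolute jump, followed by low byte then high byte

def lo_hi(addr):
    return [addr & 0xFF, (addr >> 8) & 0xFF]

def install_patch(image, base, offset, patch_body, patch_addr):
    """Replace the 3-byte instruction at `offset` with JMP patch_addr.
    Returns the patch-area bytes: the displaced instruction first, then
    the new code, then a jump back to just after the overlaid spot."""
    displaced = list(image[offset:offset + 3])
    image[offset:offset + 3] = bytes([JMP] + lo_hi(patch_addr))
    return bytes(displaced + patch_body + [JMP] + lo_hi(base + offset + 3))

image = bytearray([0xAD, 0x00, 0x02, 0x60])   # LDA $0200 / RTS, at $1000
patch = install_patch(image, 0x1000, 0, [0xEA], 0xC000)
```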
Joe Pfeiffer
2021-02-05 19:23:07 UTC
Post by gareth evans
Many moons ago, I did a 3D Space Invaders game for the Oric. This was a
6502 machine code exercise saving back to a cassette recorder.
My approach to dealing with the inevitable bugs in the software was to
occasionally sprinkle a succession of three NOPs into the machine code
so that bugs might be resolved with a 3-byte jump to replace those NOPs.
1983; Ye Gods! 38 years ago!
What progress there has been in those 38 years because going back 38
years prior to then takes us to 1945 with its Williams Tubes and its
mercury delay lines, not to mention 100s of valves (tubes to the Yanks)
that had increased reliability if you never switched them off!
Ah, yes. Just a couple of years later I was doing some programming on a
preproduction Fairchild F9450 (Mil-Std 1750A architecture). I
discovered that occasionally specific extra high order bits were set in
the PC when an interrupt took place -- hardware bug. I wound up
inserting jumps at the addresses the PC wound up at, back to where it
was supposed to be.

I had to do extensive bowdlerizing of the comments in my code before
turning it over to them. Then the other programmer on the project took
the wrong floppy with him... fortunately they had a good sense of humor
about it.

At the time, yield was being measured in wafers per chip. I don't know
if they ever did get it to something useable, or if it ever went into
actual production.
Anssi Saari
2021-02-06 08:52:11 UTC
Post by gareth evans
Many moons ago, I did a 3D Space Invaders game for the Oric. This was a
6502 machine code exercise saving back to a cassette recorder.
I remember that name, it seems Oric sold here in Finland back then. I
don't think I ever used one though.

In fact, with a quick look at a magazine archive, when the first(ish?)
home computer magazine "Mikrobitti" started here in 1984, the first
issue had a little BASIC game listing for the Oric.
Thomas Koenig
2021-02-05 12:39:57 UTC
Post by Andreas Kohlbach
... yes, that might be a problem. I had a look into 6502, Z80 and M68000
before. That kind id spoils the learning of something different for me.
I did a bit of assembler on a 6502 (a C64), and some on a Z80.

When I first laid hands on a handbook for the 68000, I thought "This is
not assembler, this is a high-level language."
gareth evans
2021-02-05 13:11:26 UTC
Post by Thomas Koenig
Post by Andreas Kohlbach
... yes, that might be a problem. I had a look into 6502, Z80 and M68000
before. That kind id spoils the learning of something different for me.
I did a bit of assembler on a 6502 (a C64), and some on a Z80.
When I first laid hands on a handbook for the 68000, I thought "This is
not assembler, this is a high-level language."
IMHO the 68000 showed its descent from the PDP-11, but with the
lamentable loss of the PDP-11's extra level of indirection available
on every address mode - plain indirect, auto-increment, auto-decrement
and indexed.
Peter Flass
2021-02-05 18:37:48 UTC
Post by Thomas Koenig
Post by Andreas Kohlbach
... yes, that might be a problem. I had a look into 6502, Z80 and M68000
before. That kind id spoils the learning of something different for me.
I did a bit of assembler on a 6502 (a C64), and some on a Z80.
When I first laid hands on a handbook for the 68000, I thought "This is
not assembler, this is a high-level language."
Wonderful architecture.
--
Pete
Charlie Gibbs
2021-02-05 19:36:46 UTC
Post by Peter Flass
Post by Thomas Koenig
Post by Andreas Kohlbach
... yes, that might be a problem. I had a look into 6502, Z80 and M68000
before. That kind id spoils the learning of something different for me.
I did a bit of assembler on a 6502 (a C64), and some on a Z80.
When I first laid hands on a handbook for the 68000, I thought "This is
not assembler, this is a high-level language."
Wonderful architectuere.
Yes, and one more example of how it's better to be first than best.
As someone once quipped, "It's a good thing iAPX432 didn't catch on.
Otherwise, a truly horrible Intel architecture might have taken over
the world."
--
/~\ Charlie Gibbs | "Some of you may die,
\ / <***@kltpzyxm.invalid> | but it's a sacrifice
X I'm really at ac.dekanfrus | I'm willing to make."
/ \ if you read it the right way. | -- Lord Farquaad (Shrek)
maus
2021-02-05 20:39:28 UTC
Post by Charlie Gibbs
Post by Peter Flass
Post by Thomas Koenig
Post by Andreas Kohlbach
... yes, that might be a problem. I had a look into 6502, Z80 and M68000
before. That kind id spoils the learning of something different for me.
I did a bit of assembler on a 6502 (a C64), and some on a Z80.
When I first laid hands on a handbook for the 68000, I thought "This is
not assembler, this is a high-level language."
Wonderful architectuere.
Yes, and one more example of how it's better to be first than best.
As someone once quipped, "It's a good thing iAPX432 didn't catch on.
Otherwise, a truly horrible Intel architecture might have taken over
the world."
Somewhat similar: after coming out of a film years ago, another child
said, "Aren't we lucky that the Germans and Japs did not win the war."
--
***@mail.com
Peter Flass
2021-02-06 20:54:07 UTC
Post by maus
Post by Charlie Gibbs
Post by Peter Flass
Post by Thomas Koenig
Post by Andreas Kohlbach
... yes, that might be a problem. I had a look into 6502, Z80 and M68000
before. That kind id spoils the learning of something different for me.
I did a bit of assembler on a 6502 (a C64), and some on a Z80.
When I first laid hands on a handbook for the 68000, I thought "This is
not assembler, this is a high-level language."
Wonderful architectuere.
Yes, and one more example of how it's better to be first than best.
As someone once quipped, "It's a good thing iAPX432 didn't catch on.
Otherwise, a truly horrible Intel architecture might have taken over
the world."
Somewhat similiar, after coming out of a film years ago, another child
said, "Arent we lucky that the Germans and Japs did not win the war."
Besides being horrible, it was very limited. I forget the hardware segment
size, but it was very small for something that was supposed to be a “micro
mainframe”.
--
Pete
Kerr-Mudd,John
2021-02-06 11:18:01 UTC
Post by Peter Flass
Post by Thomas Koenig
Post by Andreas Kohlbach
... yes, that might be a problem. I had a look into 6502, Z80 and
M68000 before. That kind id spoils the learning of something
different for me.
I did a bit of assembler on a 6502 (a C64), and some on a Z80.
When I first laid hands on a handbook for the 68000, I thought "This
is not assembler, this is a high-level language."
Wonderful architectuere.
But now it's pining for the fjords?
--
Bah, and indeed, Humbug.
Thomas Koenig
2021-02-06 11:27:25 UTC
Post by Kerr-Mudd,John
Post by Peter Flass
Post by Thomas Koenig
Post by Andreas Kohlbach
... yes, that might be a problem. I had a look into 6502, Z80 and
M68000 before. That kind id spoils the learning of something
different for me.
I did a bit of assembler on a 6502 (a C64), and some on a Z80.
When I first laid hands on a handbook for the 68000, I thought "This
is not assembler, this is a high-level language."
Wonderful architectuere.
But now it's pining for the fjords?
ColdFire is still being sold, but AFAIK it implements a subset of
the 68000 ISA.
Peter Flass
2021-02-06 20:54:08 UTC
Post by Thomas Koenig
Post by Kerr-Mudd,John
Post by Peter Flass
Post by Thomas Koenig
Post by Andreas Kohlbach
... yes, that might be a problem. I had a look into 6502, Z80 and
M68000 before. That kind id spoils the learning of something
different for me.
I did a bit of assembler on a 6502 (a C64), and some on a Z80.
When I first laid hands on a handbook for the 68000, I thought "This
is not assembler, this is a high-level language."
Wonderful architectuere.
But now it's pining for the fjords?
ColdFire is still being sold, but AFAIK it implements a subset of
the 68000 ISA.
No memory management, I believe.
--
Pete
Ahem A Rivet's Shot
2021-02-06 21:14:48 UTC
On Sat, 6 Feb 2021 13:54:08 -0700
Post by Peter Flass
Post by Thomas Koenig
ColdFire is still being sold, but AFAIK it implements a subset of
the 68000 ISA.
No memory management, I believe.
The 68000 didn't have memory management originally, there was a
separate MMU chip. I once used a unix(ish) system with a 68000 and no MMU!
It was fragile and the OS was older than fsck so I got familiar with ncheck
and icheck as we had to repair the file system by hand every time it crashed
(which as it was used for C development was pretty often).
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Niklas Karlsson
2021-02-06 10:47:02 UTC
Post by Anssi Saari
Post by Thomas Koenig
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Has this been decided then? What about Emacs in Windows? Actually I
probably would've abandoned Emacs if it weren't for the VHDL and ORG
modes.
I run Emacs for a sole purpose that's probably rather obscure outside
certain circles in this country: LysKOM. KOM is a kind of bulletin board
system that first turned up in the late 1970s. Like many other older
bulletin board solutions, it is in some ways a great improvement on its
successors. I don't believe it ever had much traction outside Sweden,
though I think someone managed to push it (in an English-language
version known as COM) as an EU standard (that very few used).

It received some Swedish media attention in the 1980s due to being
involved in a libel case.

KOM initially worked by logging in timesharing style, so the client was
local only, but LysKOM speaks TCP/IP. It is so named because it was
developed by Lysator, the computing society at the University of
Linköping.

There are various LysKOM clients including a JavaScript one, but the
Emacs LISP one is the most complete and feature-rich.

Wow, I'm actually on topic!

Niklas
--
In college, I wrote a TECO-like programming language as a joke - one-letter
statements, totally unreadable. Then I discovered sendmail, and stopped,
because the joke had been done so much better than I ever could.
-- Mark 'Kamikaze' Hughes
Freddy1X
2021-02-04 21:50:05 UTC
Post by Thomas Koenig
The proverbial computer holy war is probably big vs. little endian,
which has been pretty much decided in favor of little endian
by default or by Intel, although TCP/IP is big-endian.
Machine language vs. those CPU-time-wasting assemblers - made
obsolescent by high-level programming languages.
High-level vs. assembler: Hardly anybody does assembler any more.
Structured programming vs. goto - structured programming won.
RISC vs. CISC: The really complex CISC-architectures died out.
The difference is now less important with superscalar architectures.
VMS vs. Unix - decided by DEC's fate, and by Linux.
DECNET vs. TCP/IP: See above.
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Everybody vs. Fortran: Hating FORTRAN become the very definition
of a computer scientist. They didn't notice that, since 1991,
it has become quite a modern programming language.
Others?
PDA Vs. smartphone.

There is mainframe Vs. PC/distributed Vs. laptop Vs. pocket thingy.

Freddy,
pick three of the above.
--
Not for sale to minors.

/|>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\|
/| I may be demented \|
/| but I'm not crazy! \|
/|<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<\|
* SPAyM trap: there is no X in my address *
J. Clarke
2021-02-04 23:39:10 UTC
Post by Freddy1X
Post by Thomas Koenig
The proverbial computer holy war is probably big vs. little endian,
which has been pretty much decided in favor of little endian
by default or by Intel, although TCP/IP is big-endian.
Machine language vs. those CPU-time-wasting assemblers - made
obsolescent by high-level programming languages.
High-level vs. assembler: Hardly anybody does assembler any more.
Structured programming vs. goto - structured programming won.
RISC vs. CISC: The really complex CISC-architectures died out.
The difference is now less important with superscalar architectures.
VMS vs. Unix - decided by DEC's fate, and by Linux.
DECNET vs. TCP/IP: See above.
Emacs vs. vi: vim has led to a resurgence of vi, and many people
are using this even on Windows.
Everybody vs. Fortran: Hating FORTRAN become the very definition
of a computer scientist. They didn't notice that, since 1991,
it has become quite a modern programming language.
Others?
PDA Vs. smartphone.
I'm not sure that was ever a "holy war". Smartphones were a natural
evolution of PDAs.
Post by Freddy1X
There is mainframe Vs. PC/distributed Vs. laptop Vs. pocket thingy.
To some extent that's still going.
Post by Freddy1X
Freddy,
pick three of tha above,
John Levine
2021-02-04 22:14:45 UTC
Post by Thomas Koenig
Machine language vs. those CPU-time-wasting assemblers - made
obsolescent by high-level programming languages.
Actually, made obsolescent by random access main memory. As we recall
from the Story of Mel, if your main memory is a drum, your program
runs a lot faster if it's laid out to minimize rotational delay. And
also if you know where each instruction is, you can save a few words
by using the instructions as constants. That lived on as long as the
PDP-11 where the Unix C compiler had a peephole optimization to turn

OP #N,N(R)

into

OP (PC),N(R)

to use the same instruction word as both the constant N and the index offset N.
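The saving can be shown with a toy operand-fetch model (hypothetical
Python, not real PDP-11 semantics: the opcode is left abstract and only
the word counts matter). With source mode (PC), the word after the
opcode is consumed twice, once as the source operand and once as the
destination index, so the separate immediate word disappears:

```python
def operands_immediate(words):
    """OP #N, N(R): opcode word, immediate word N, index word N."""
    op, imm, index = words
    return imm, index, len(words)      # (source, dest index, size in words)

def operands_pc_deferred(words):
    """OP (PC), N(R): the single word after the opcode is read through
    PC as the source operand *and* fetched as the destination index."""
    op, shared = words
    return shared, shared, len(words)  # same operands, one word shorter
```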
Post by Thomas Koenig
High-level vs. assembler: Hardly anybody does assembler any more.
FORTRAN did that, while producing shockingly good code. It did stuff
that's still pretty advanced, like nested loop merging.
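One simple flavor of loop merging, sketched in Python for brevity (the
function names are invented for illustration; the early FORTRAN
compilers did this on DO loops): two passes over the same index range
collapse into one, with identical results and half the loop overhead.

```python
def separate(a, b):
    out1 = [x * 2 for x in a]              # first loop over the range
    out2 = [x + y for x, y in zip(a, b)]   # second loop, same range
    return out1, out2

def fused(a, b):
    out1, out2 = [], []
    for x, y in zip(a, b):                 # one merged loop does both
        out1.append(x * 2)
        out2.append(x + y)
    return out1, out2
```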
Post by Thomas Koenig
RISC vs. CISC: The really complex CISC-architectures died out.
The difference is now less important with superscalar architectures.
Technology did that. CISC made sense when RAM was expensive, microcode
ROM was faster than RAM, and caches were too expensive for anything but
high end mainframes. That was then.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
J. Clarke
2021-02-04 23:40:42 UTC
Post by John Levine
Post by Thomas Koenig
Machine language vs. those CPU-time-wasting assemblers - made
obsolescent by high-level programming languages.
Actually, made obsolescent by random access main memory. As we recall
from the Story of Mel, if your main memory is a drum, your program
runs a lot faster if it's laid out to minimize rotational delay. And
also if you know where each instruction is, you can save a few words
by using the instructions as constants. That lived on as long as the
PDP-11 where the Unix C compiler had a peephole optimization to turn
OP #N,N(R)
into
OP (PC),N(R)
to use the same instruction word as both the constant N and the index offset N.
Post by Thomas Koenig
High-level vs. assembler: Hardly anybody does assembler any more.
FORTRAN did that, but producing shockingly good code. It did stuff
that's still pretty advanced, like nested loop merging.
Post by Thomas Koenig
RISC vs. CISC: The really complex CISC-architectures died out.
The difference is now less important with superscalar architectures.
Technology did that. CISC made sense when RAM was expensive, microcode
ROM was faster than RAM, and caches were too expensive for anything but
high end mainframes. That was then.
So you're saying Intel is not CISC?
John Levine
2021-02-05 03:04:43 UTC
Post by J. Clarke
Post by John Levine
Technology did that. CISC made sense when RAM was expensive, microcode
ROM was faster than RAM, and caches were too expensive for anything but
high end mainframes. That was then.
So you're saying Intel is not CISC?
I think that x86 tells us that with enough thrust pigs can indeed fly.

Your point about the enormously long X86 architecture manual is a good one but it feels
to me less like Intel is making the instruction set more complex and more like it's
adding coprocessors that are started by instructions. Dunno how useful a distinction
it is but I think you can make a distinction between the VAX approach of making every
instruction and every address mode utterly general and the x86 adding features to speed
up graphics arithmetic or enable nested virtualization.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Thomas Koenig
2021-02-06 11:24:27 UTC
Post by John Levine
Post by J. Clarke
Post by John Levine
Technology did that. CISC made sense when RAM was expensive, microcode
ROM was faster than RAM, and caches were too expensive for anything but
high end mainframes. That was then.
So you're saying Intel is not CISC?
I think that x86 tells us that with enough thrust pigs can indeed fly.
That has already been established by the F-4 Phantom. "Iron pig"
(Eisenschwein) was one of its nicknames in the German air force, another
was "air defence Diesel" (Luftverteidigungsdiesel) due to its highly
visible exhaust trail.
Thomas Koenig
2021-02-05 13:35:15 UTC
Post by John Levine
CISC made sense when RAM was expensive, microcode
ROM was faster than RAM, and caches were too expensive for anything but
high end mainframes. That was then.
Interestingly enough, the first PDP-11 with a cache was introduced in
1975, and the Data General Eclipse had a cache from its introduction
in 1974.

Seems like an idea whose time had come for minicomputers in the
mid-1970's, when CISC was in full swing.
Elliott Roper
2021-02-05 17:27:09 UTC
Post by Thomas Koenig
Post by John Levine
CISC made sense when RAM was expensive, microcode
ROM was faster than RAM, and caches were too expensive for anything but
high end mainframes. That was then.
Interestingly enough, the first PDP-11 with cache was introdued
1975, the Data General Eclipse had a cache with its introduction
in 1974.
Seems like an idea whose time had come for minicomputers in the
mid-1970's, when CISC was in full swing.
It missed the bus (pun intended): the J-11's (aka 11/73) Q-bus main
memory was as near as dammit as quick as the 11/70's cache.
--
To de-mung my e-mail address:- fsnospam$elliott$$
PGP Fingerprint: 1A96 3CF7 637F 896B C810 E199 7E5C A9E4 8E59 E248
Dan Espen
2021-02-05 00:14:04 UTC
Post by Thomas Koenig
RISC vs. CISC: The really complex CISC-architectures died out.
The difference is now less important with superscalar architectures.
So I'm thinking, wait a minute, isn't X86 CISC? I've seen some of its
instruction set, sure looks CISCy to me. Here's what Google says:

x86 is definitely CISC, but one of the first things a modern x86 CPU
does with an instruction stream is convert it into a different
instruction set that it uses internally, which is (but doesn't have to
be) more RISC-like. Effectively, they appear as CISC to the outside
world, but are RISC under the hood. (Nov 28, 2018)

So I'm guessing the other big CISC player is Z/Series. Probably the
same cop-out applies.

I started out programming an IBM 14xx which took CISC to a whole new
level since even memory addressing was in decimal. A thoroughly
pleasant environment to work in. So, big CISC fan here.
--
Dan Espen
John Levine
2021-02-05 03:14:06 UTC
Post by Dan Espen
x86 is definitely CISC, but one of the first things a modern x86 CPU
does with an instruction stream is convert it into a different
instruction set that it uses internally, which is (but doesn't have to
be) more RISC-like. Effectively, they appear as CISC to the outside
world, but are RISC under the hood.Nov 28, 2018
So I'm guessing the other big CISC player is Z/Series. Probably the
same cop-out applies.
It's even more like that than you imagine.

The complex zSeries instructions are implemented in millicode, which is vertical microcode
consisting of simpler zSeries instructions implemented in hardware.

https://www.cmg.org/wp-content/uploads/2016/08/the-what-and-why-of-system-z-millicode.pdf

https://www.researchgate.net/publication/224103049_Millicode_in_an_IBM_zSeries_processor
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly