Discussion:
ISA
Elijah Stone
2023-08-31 07:29:11 UTC
I am looking for information on the history and development of ISAs as a
technology. Including:

- Why were they invented? (IBM wanted to sell their customers hardware
upgrades without requiring that they rewrite their code?)

- What was early sentiment towards them like? (From users, from
compiler-writers, from other CPU designers...)

- What was their adoption story like? How long did it take before they were
basically totally ubiquitous, as today?

- What were the _social_ factors that caused GPUs to expose primarily
higher-level interfaces, leaving the lower-level ones unstable and mostly
unused, unlike CPUs?

Any pointers?
Theo Markettos
2023-08-31 10:37:50 UTC
Permalink
Post by Elijah Stone
I am looking for information on the history and development of ISAs as a
technology.
- Why were they invented? (IBM wanted to sell their customers hardware
upgrades without requiring that they rewrite their code?)
I think this is slightly the wrong question. Every stored program computer
has an ISA, being the encoding of stored instructions to logical operations.
In the early days the ISA was unique to the machine, but the machines
started as one-offs anyway.

What you're talking about is the split between architecture (the
specification that describes the programmer-facing model) and
microarchitecture (how the architecture is realised in a particular instance
of the machine, in terms of its design). The ISA is one feature of the
architecture, but not the only one - eg the memory model is another
architectural feature.

The general principles existed from the beginning, but not the clear
separation. IBM in the System/360 implemented a single architecture via
different microarchitectures. I'm not sure if they were the first.

Perhaps the question is more about the development process: first design a
specification for how the programming model would look. Then hand it over
to somebody to design implementation(s). Oftentimes those two would be the
same people, resulting in mixed up architectures, or optimised for a
specific set of implementation conditions. That happens when the team or
budget is small - even in the mid 80s the 32-bit ARM was designed this way.
Post by Elijah Stone
- What was early sentiment towards them like? (From users, from
compiler-writers, from other CPU designers...)
- What was their adoption story like? How long did it take before they were
basically totally ubiquitous, as today?
Assuming you're talking about the fixed architectural specifications, it's
still common to build your own ISA for a special purpose machine. That is
typically deeply embedded so it is not publicly visible, but it's there.

RISC-V makes some effort to embrace this with its support for architectural
extensions: rather than doing your own entirely custom architecture, you can
still do it but use their boring baseline (add/sub/load/store/etc) instead
of rolling your own. But there is still a lot of rolling your own going on,
especially for things which aren't intended to run traditional software.
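
To make that concrete: with a RISC-V GNU toolchain you can emit an
instruction in one of the reserved CUSTOM opcode spaces through the
assembler's .insn directive. A minimal sketch in C, where the funct
values and the operation itself are invented - only the encoding
mechanism is real:

#include <stdint.h>

/* Hypothetical R-type instruction in the CUSTOM_0 opcode space
   (funct3=0, funct7=0 picked arbitrarily).  What it computes is up
   to whoever builds the core; binutils just encodes the bits. */
static inline uint32_t my_accel_op(uint32_t a, uint32_t b)
{
    uint32_t r;
    asm volatile (".insn r CUSTOM_0, 0x0, 0x0, %0, %1, %2"
                  : "=r"(r)
                  : "r"(a), "r"(b));
    return r;
}
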
Post by Elijah Stone
- What were the _social_ factors that caused GPUs to expose primarily
higher-level interfaces, leaving the lower-level ones unstable and mostly
unused, unlike CPUs?
GPUs were not originally general purpose machines, they started off being
for graphics. Hence the various APIs like DirectX and OpenGL built up which
abstracted away the GPU as a processor - the application makes an OpenGL
call to a large piece of CPU-side software (called the 'driver', with
userspace and kernel parts) that 'does something' using the GPU to make it
happen - the 'something' is entirely opaque to the programmer. This model
allowed vendors to radically change how the 'something' worked behind the
scenes (multiple times), from fixed-function pipelines (not Turing complete)
through to SIMT processors.

Latterly vendors have exposed more 'processor' like APIs (CUDA, OpenCL) that
expose more of the SIMT side of things to the programmer. But that's
because they started off not exposing the programmer to the architecture,
just to the API, and so they don't want software to hard code the
architecture.

(a fun alternative here was Intel's Larrabee/Xeon Phi, where they raided
their dusty chest of blueprints and produced a chip with lots of tiny
Pentium1 CPUs on it. There the architecture was fixed into the programming
model. It didn't do well, although not entirely due to this choice)

Another driving factor here is that CPUs are now fast enough to do
compilation of the GPU code at runtime. Because we don't know what specific
GPU the application will be running on ahead of time, the driver may contain
a full compiler (eg LLVM) that takes the application's code (in an
architecture-neutral format, not necessarily source code) and compiles it
for the specific GPU we have in the system. Because we don't need to ship a
precompiled binary, this allows more flexibility in that the newly-released
N+1th generation of GPU may have a completely different architecture to the
Nth.
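
A minimal sketch of that model using the OpenCL C API (error handling
omitted, kernel trivial): the kernel travels as source, and it is the
clBuildProgram call where the vendor driver's embedded compiler
targets whatever GPU it actually found.

#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *v)"
    "{ v[get_global_id(0)] *= 2.0f; }";

int main(void)
{
    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);

    /* The driver compiles here, at runtime, for the specific GPU
       present - which is why next year's GPU with a completely
       different architecture still runs today's binary. */
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    return 0;
}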

Theo
Peter Flass
2023-08-31 13:10:57 UTC
Post by Theo Markettos
GPUs were not originally general purpose machines, they started off being
for graphics. Hence the various APIs like DirectX and OpenGL built up which
abstracted away the GPU as a processor - the application makes an OpenGL
call to a large piece of CPU-side software (called the 'driver', with
userspace and kernel parts) that 'does something' using the GPU to make it
happen - the 'something' is entirely opaque to the programmer. This model
allowed vendors to radically change how the 'something' worked behind the
scenes (multiple times), from fixed-function pipelines (not Turing complete)
through to SIMT processors.
This always was a terrible model. For a long time it hindered development
of open source drivers for GPUs, leaving only (ugh!) Windows having drivers
that supported all the features. I don’t really follow the situation that
closely; my impression is that this has gotten a lot better but is still
not perfect. GPU designers really need to open up their specs.
--
Pete
Theo
2023-08-31 21:12:15 UTC
Post by Peter Flass
This always was a terrible model. For a long time it hindered development
of open source drivers for GPUs, leaving only (ugh!) Windows having drivers
that supported all the features. I don’t really follow the situation that
closely; my impression is that this has gotten a lot better but is still
not perfect. GPU designers really need to open up their specs.
I would partially disagree: the open source drivers that exist today are
mostly first-party drivers. In other words AMD wrote their driver and chose
to open source it. The GPU is complex enough that it's the most efficient
way to do it - the team who writes the driver has access to the hardware
source code, which the community doesn't have.

In general no devices have complete enough documentation to write full
third-party drivers, because writing that documentation is very expensive
and the audience for that documentation is very small (just the handful of
driver writers). Intel is probably best at this, but their documentation is
targeted specifically at open source driver writers (and so there may be
things missing that are used by eg the Windows driver).

There are a few third party drivers based on reverse engineering (nouveau
for Nvidia, Asahi Linux's Apple Silicon GPU driver). They are at best
partial implementations. It's amazing they exist, but better would have
been for nvidia or Apple to open source their first-party code.

If you decide that GPUs should conform to a stable architectural model,
like CPUs do, you are leaving a lot of performance on the table. We
wouldn't have the GPUs of today if they were still the same architectures of
decades ago (GPGPU compute wouldn't even be a thing).

Theo
Peter Flass
2023-09-01 00:57:43 UTC
Post by Theo
If you decide that GPUs should conform to a stable architectural model,
like CPUs do, you are leaving a lot of performance on the table. We
wouldn't have the GPUs of today if they were still the same architectures of
decades ago (GPGPU compute wouldn't even be a thing).
I don’t claim to be a hardware expert, but it seems to me that graphics
architecture is analogous to other computer architecture, at a higher
level. Computer ISA’s have primitives like load, or, add, subtract, etc.
Graphics primitives are things like draw a line, draw a solid, rotate an
image, etc. Just like mainframes are microprogrammed, the graphics chip
could be microprogrammed and expose the high-level functions to the driver.
This leaves lots of room for optimization in the microcode while presenting
a stable interface to the outside world. Maybe this is a hopelessly naive
idea.
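
For what the idea might look like, a sketch in C (all names invented):
the command set is the stable "architecture"; the microcode that
interprets it can change freely underneath.

#include <stdint.h>

enum gfx_op { GFX_DRAW_LINE, GFX_FILL_TRIANGLE, GFX_ROTATE_IMAGE };

struct gfx_cmd {
    enum gfx_op op;
    int32_t     arg[6];     /* e.g. x0,y0,x1,y1 for GFX_DRAW_LINE */
};

#define RING_SIZE 1024      /* power of two for cheap wraparound  */

/* The driver only ever appends fixed-format commands to a ring;
   the device's microcode interprets them, much as a microprogrammed
   mainframe interprets its stable ISA. */
void submit(struct gfx_cmd *ring, unsigned *tail,
            const struct gfx_cmd *c)
{
    ring[*tail & (RING_SIZE - 1)] = *c;
    (*tail)++;
}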
--
Pete
John Dallman
2023-09-04 15:32:00 UTC
Post by Theo Markettos
(a fun alternative here was Intel's Larrabee/Xeon Phi, where they
raided their dusty chest of blueprints and produced a chip with
lots of tiny Pentium1 CPUs on it. There the architecture was fixed
into the programming model. It didn't do well, although not entirely
due to this choice)
They wanted to solve the GPU programming problem by applying an existing
ISA. It might have gone better had they been able to write software that
made it an effective GPU, but Intel were unable to do that. They tried to
sell it as a CPU offload system, but its old-fashioned floating point and
limited access to memory doomed that, too.

John
Quadibloc
2023-09-07 14:30:52 UTC
Post by Elijah Stone
I am looking for information on the history and development of ISAs as a
technology.
- Why were they invented? (IBM wanted to sell their customers hardware
upgrades without requiring that they rewrite their code?)
Post by Theo Markettos
I think this is slightly the wrong question. Every stored program computer
has an ISA, being the encoding of stored instructions to logical operations.
In the early days the ISA was unique to the machine, but the machines
started as one-offs anyway.
But given his reference to IBM, apparently he is thinking of the ISA as existing
as a "thing" only when it is shared by more than one model of a computer.

Since the EDSAC and the EDVAC, built separately according to the same plan,
shared an instruction set, but were also basically the same implementation, I'm
not sure they would count by his standards.

Before the IBM 360, the IBM 709 was upwards-compatible with the IBM 704,
and Univac made the 1103 and the 1103A.

The SDS 910, 920, and 930 were a series of three machines of different sizes
available from Scientific Data Systems which shared an instruction set for a
24-bit word. And the GE 225 and 235 were part of a compatible series with a
20 bit word also.

John Savard
Peter Flass
2023-09-09 01:07:05 UTC
Post by Quadibloc
Before the IBM 360, the IBM 709 was upwards-compatible with the IBM 704,
and Univac made the 1103 and the 1103A.
The SDS 910, 920, and 930 were a series of three machines of different sizes
available from Scientific Data Systems which shared an instruction set for a
24-bit word. And the GE 225 and 235 were part of a compatible series with a
20 bit word also.
I wasn’t aware of the incompatibilities among various PDP-11 models, but
the -10s seemed to have areas of incompatibility also. I believe IBM was
the first to define an “architecture” independent of any particular
implementation, and then make sure (nearly) all S/360 models conformed to
it.
--
Pete
Scott Lurndal
2023-09-09 15:26:29 UTC
Post by Peter Flass
I wasn’t aware of the incompatibilities among various PDP-11 models, but
the -10s seemed to have areas of incompatibility also. I believe IBM was
the first to define an “architecture” independent of any particular
implementation, and then make sure (nearly) all S/360 models conformed to
it.
Burroughs early architectures (medium and large systems (B6xxx and successor))
were independent of any particular implementation. And like the 360 family,
new features were added to the architecture to provide new capabilities in
a backward compatible fashion. The B3500 was roughly contemporaneous to
the 360 family.
Johnny Billquist
2023-09-09 19:02:04 UTC
Post by Peter Flass
I wasn’t aware of the incompatibilities among various PDP-11 models, but
the -10s seemed to have areas of incompatibility also. I believe IBM was
the first to define an “architecture” independent of any particular
implementation, and then make sure (nearly) all S/360 models conformed to
it.
The incompatibilities between PDP-11 models mostly came about because
the architecture wasn't really formally defined; it was mostly general
understanding and "see how the previous implementation did it".
In some cases slight differences occurred because, for various reasons,
it was easier or better for the implementors to have a slightly
different behavior, and since the architecture didn't have a proper
definition, the difference couldn't really be called wrong.

It's mostly somewhat odd/unused sequences that behave differently.

For example:

MOV R0,(R0)+

the question is, the R0 that is written, is it the value before or after
the increment? Undefined. Different models do it differently.
But this is not a sequence you'd normally be using in your code.
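
In emulator terms (a hypothetical fragment, word addressing
simplified), the two readings are:

#include <stdint.h>

uint16_t r0;                /* R0                          */
uint16_t mem[32768];        /* 64KB, addressed in words    */

void mov_r0_r0inc_a(void)   /* some models: old R0 stored  */
{
    uint16_t addr = r0;     /* destination address         */
    uint16_t val  = r0;     /* source sampled first        */
    r0 += 2;                /* autoincrement of (R0)+      */
    mem[addr >> 1] = val;
}

void mov_r0_r0inc_b(void)   /* other models: new R0 stored */
{
    uint16_t addr = r0;
    r0 += 2;                /* increment happens first     */
    mem[addr >> 1] = r0;
}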

Some instructions might also affect the processor status flags
differently. SWAB affecting the V (overflow) flag, for example.

And of course, the architecture was extended a bit, causing some
additional incompatibilities. Not all PDP-11s have the RTT instruction,
for example.


Because of experiences with the PDP-11, DEC did do a very formal
definition for things on the VAX, to avoid something similar. There are
still a few differences on some VAX CPUs, but I think they probably
classify as bugs.

Johnny
Rich Alderson
2023-09-09 22:24:48 UTC
Post by Peter Flass
I wasn't aware of the incompatibilities among various PDP-11 models, but the
-10s seemed to have areas of incompatibility also. I believe IBM was the
first to define an "architecture" independent of any particular
implementation, and then make sure (nearly) all S/360 models conformed to it.
Other than the addition of extended addressing to the KL-10 and KS-10, what
did you have in mind for the PDP-10 systems? Asking for a friend... ;->
--
Rich Alderson ***@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
Peter Flass
2023-09-09 23:34:08 UTC
Post by Rich Alderson
I wasn't aware of the incompatibilities among various PDP-11 models, but the
-10s seemed to have areas of incompatibility also. I believe IBM was the
first to define an "architecture" independent of any particular
implementation, and then make sure (nearly) all S/360 models conformed to it.
Other than the addition of extended addressing to the KL-10 and KS-10, what
did you have in mind for the PDP-10 systems? Asking for a friend... ;->
I don’t have any specific knowledge, other than Barb’s postings about the
travails of the OS guys trying to support new hardware. (mostly
peripherals, but some CPU considerations, I think)
--
Pete
Johnny Billquist
2023-09-10 00:23:40 UTC
Post by Rich Alderson
I wasn't aware of the incompatibilities among various PDP-11 models, but the
-10s seemed to have areas of incompatibility also. I believe IBM was the
first to define an "architecture" independent of any particular
implementation, and then make sure (nearly) all S/360 models conformed to it.
Other than the addition of extended addressing to the KL-10 and KS-10, what
did you have in mind for the PDP-10 systems? Asking for a friend... ;->
Were byte instructions always in there?
Or was that something that changed just between the -6 and -10?

Obviously, for OSes, there were differences, but those were mainly in
I/O and the pager, and weren't about instructions as such.

Johnny
Lars Brinkhoff
2023-09-10 06:07:03 UTC
Post by Johnny Billquist
Post by Rich Alderson
Other than the addition of extended addressing to the KL-10 and KS-10, what
did you have in mind for the PDP-10 systems? Asking for a friend... ;->
Were byte instructions always in there?
Or was that something that changed just between the -6 and -10?
Yes, they were there since the PDP-6. On the KA10 they were "optional".

There were some minor changes between the PDP-6 and the KA10. They are
easily avoidable, but strictly speaking, they were not 100% compatible.
The KI10 added a few instructions, but was otherwise backwards
compatible. The KL10 added a bunch of new instructions. The KS10 kept
the same user mode instruction set.
Johnny Billquist
2023-09-10 13:09:04 UTC
Post by Lars Brinkhoff
Post by Johnny Billquist
Post by Rich Alderson
Other than the addition of extended addressing to the KL-10 and KS-10, what
did you have in mind for the PDP-10 systems? Asking for a friend... ;->
Were byte instructions always in there?
Or was that something that changed just between the -6 and -10?
Yes, they were there since the PDP-6. On the KA10 they were "optional".
There were some minor changes between the PDP-6 and the KA10. They are
easily avoidable, but strictly speaking, they were not 100% compatible.
The KI10 added a few instructions, but was otherwise backwards
compatible. The KL10 added a bunch of new instructions. The KS10 kept
the same user mode instruction set.
So there are some incompatibilities between PDP-10 models. Not as many
as for the PDP-11, and it's more in the line of added instructions. But
it still means differences exist. Which a program could even use to
figure out what model of CPU it is running on.

Johnny
Lars Brinkhoff
2023-09-10 16:11:51 UTC
Post by Johnny Billquist
So there are some incompatibilities between PDP-10 models. Not as many
as for the PDP-11, and it's more in the line of added instructions. But
it still means differences exist. Which a program could even use to
figure out what model of CPU it is running on.
Yes, and there even were such programs in the official DEC
documentation. However, they don't use any instructions that are
undefined on the earlier models, but rather more minute details.
Johnny Billquist
2023-09-10 23:53:14 UTC
Post by Lars Brinkhoff
Post by Johnny Billquist
So there are some incompatibilities between PDP-10 models. Not as many
as for the PDP-11, and it's more in the line of added instructions. But
it still means differences exist. Which a program could even use to
figure out what model of CPU it is running on.
Yes, and there even were such programs in the official DEC
documentation. However, they don't use any instructions that are
undefined on the earlier models, but rather more minute details.
Oh ho... More differences then. :-)
Like on a PDP-11, where SWAB might set or clear the V flag, depending on
model.

Or how a PDP-8 reacts if you combine RAL and RAR.

So much fun with the weird small details.

Johnny
Peter Flass
2023-09-11 19:04:00 UTC
Post by Johnny Billquist
Post by Lars Brinkhoff
Post by Johnny Billquist
So there are some incompatibilities between PDP-10 models. Not as many
as for the PDP-11, and it's more in the line of added instructions. But
it still means differences exist. Which a program could even use to
figure out what model of CPU it is running on.
Yes, and there even were such programs in the official DEC
documentation. However, they don't use any instructions that are
undefined on the earlier models, but rather more minute details.
Oh ho... More differences then. :-)
Like on a PDP-11, where SWAB might set or clear the V flag, depending on
model.
Or how a PDP-8 reacts if you combine RAL and RAR.
So much fun with the weird small details.
If you’re not 100% compatible, it’s going to come back and bite someone.
--
Pete
Ahem A Rivet's Shot
2023-09-11 19:45:13 UTC
On Mon, 11 Sep 2023 12:04:00 -0700
Post by Peter Flass
If you’re not 100% compatible, it’s going to come back and bite someone.
One of my favourite sayings from the early days of the PC clone
business - "IBM compatibles aren't".
--
Steve O'Hara-Smith
Odds and Ends at http://www.sohara.org/
Host: Beautiful Theory meet Inconvenient Fact
Obit: Beautiful Theory died today of factual inconsistency
Scott Lurndal
2023-09-11 20:06:22 UTC
Post by Johnny Billquist
Post by Lars Brinkhoff
Post by Johnny Billquist
So there are some incompatibilities between PDP-10 models. Not as many
as for the PDP-11, and it's more in the line of added instructions. But
it still means differences exist. Which a program could even use to
figure out what model of CPU it is running on.
Yes, and there even were such programs in the official DEC
documentation. However, they don't use any instructions that are
undefined on the earlier models, but rather more minute details.
Oh ho... More differences then. :-)
Like on a PDP-11, where SWAB might set or clear the V flag, depending on
model.
Or how a PDP-8 reacts if you combine RAL and RAR.
So much fun with the weird small details.
If you’re not 100% compatible, it’s going to come back and bite someone.
It's not as easy as it sounds.

https://www.masswerk.at/nowgobang/2021/6502-illegal-opcodes

This was more a function of the way the decode ROM (in effect a VLIW
control word) worked.
John Levine
2023-09-12 00:54:05 UTC
Post by Johnny Billquist
Or how a PDP-8 reacts if you combine RAL and RAR.
So much fun with the weird small details.
If you’re not 100% compatible, it’s going to come back and bite someone.
It's more subtle than that. A well defined architecture says what has
to be compatible and what can vary from one model to another. Good
software only depends on the compatible part, or has explicit
dependencies on the parts that aren't so you can rewrite that part for
new models.
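
One modern shape of such an explicit dependency, sketched for Linux
with the real getauxval(3) interface (which capability bit you then
test is architecture-specific, so it is omitted here):

#include <sys/auxv.h>
#include <stdio.h>

int main(void)
{
    /* Ask the kernel which optional architectural features this
       machine has, rather than assuming the model we developed on. */
    unsigned long caps = getauxval(AT_HWCAP);
    printf("hwcap bits: %#lx\n", caps);
    /* Code that needs an optional feature branches on its bit;
       everything else sticks to the guaranteed baseline. */
    return 0;
}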

S/360 was well defined, Vax was eventually pretty well defined, other
DEC architectures were only defined by their implementation, which as
you noted led to a lot of pain down the road.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Thomas Koenig
2023-09-12 05:26:33 UTC
Post by John Levine
S/360 was well defined, Vax was eventually pretty well defined, other
DEC architectures were only defined by their implementation, which as
you noted led to a lot of pain down the road.
Wasn't there an unholy mess about POLY on the VAX?
Peter Flass
2023-09-13 02:52:26 UTC
Post by Thomas Koenig
Post by John Levine
S/360 was well defined, Vax was eventually pretty well defined, other
DEC architectures were only defined by their implementation, which as
you noted led to a lot of pain down the road.
Wasn't there an unholy mess about POLY on the VAX?
How the VAX Lost Its POLY (and EMOD and ACB_floating too)

http://simh.trailing-edge.com/docs/vax_poly.pdf
--
Pete
Rich Alderson
2023-09-11 21:38:16 UTC
Post by Rich Alderson
I wasn't aware of the incompatibilities among various PDP-11 models, but the
-10s seemed to have areas of incompatibility also. I believe IBM was the
first to define an "architecture" independent of any particular
implementation, and then make sure (nearly) all S/360 models conformed to it.
Other than the addition of extended addressing to the KL-10 and KS-10, what
did you have in mind for the PDP-10 systems? Asking for a friend... ;->
I clearly didn't have enough coffee before checking in to a.f.c.

In the PDP-6, KA-10, KI-10, and KL-10, there is a class of instructions devoted
to input-output operations. (All I/O controllers are designed to operate the
same way.)

In the KS-10, I/O instructions are much more broadly defined, to interoperate
with Unibus peripherals (using a slightly extended Unibus which handles 18 bit
data rather than 16 bit).

As it happens, because the XKL Toad-1 uses a completely different bus structure
than any DEC product, the KS-10 model (individually defined I/O instructions)
was used in the Toad, although of course the particular assignments are very
different.

Macro-10/Macro-20 (the DEC standard assembler) has the following code in the
initialization sequence; note that this particular version includes a test for
the XKL Toad-1, a PDP-10 clone:

;HERE TO TEST FOR CPU AND SET VALUE IN .CPU.
;PDP-6 = 1
;KA-10 = 2
;KI-10 = 3
;KL-10 = 4
;XKL = 5
MOVEI V,1 ;START WITH PDP-6

JFCL 1,.+1 ;CLEAR PC CHANGE FLAG
JRST .+1 ;THEN CHANGE PC
JFCL 1,.PDP6. ;IF FLAG ON, ITS A PDP6

HRLOI 1,-2 ;CHECK FOR KA-10
AOBJP 1,.KA10. ;CHECK CARRY BETWEEN HALVES

SETZ 1, ;CLEAR AC
BLT 1,0 ;AND TRY BLT, KI WILL BE 0 AND
JUMPE 1,.KI10. ;KL WILL HAVE 1,,1

MOVSI 1,400000 ;Largest negative number
ADJBP 1,[430100,,0] ;Check if this changes
CAMN 1,[430100,,0] ;If it doesn't we have a KL10
JRST .KL10.

MOVSI 1,450000 ;A one-word global byte pointer
IBP 1 ;The KS doesn't have OWGBPs
CAMN 1,[450000,,0]
JRST .KS10. ;So the KS won't change it

; JRST .XKL1. ;but the Toad will

.XKL1.: AOS V
.KS10.: AOS V
.KL10.: AOS V
.KI10.: AOS V
.KA10.: AOS V
.PDP6.: MOVEM V,CPUV ;[775] SAVE IT FOR CORE SIZE TYPEOUT
--
Rich Alderson ***@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
Peter Flass
2023-09-12 01:40:41 UTC
Post by Rich Alderson
I clearly didn't have enough coffee before checking in to a.f.c.
In the PDP-6, KA-10, KI-10, and KL-10, there is a class of instructions devoted
to input-output operations. (All I/O controllers are designed to operate the
same way.)
In the KS-10, I/O instructions are much more broadly defined, to interoperate
with Unibus peripherals (using a slightly extended Unibus which handles 18 bit
data rather than 16 bit).
As it happens, because the XKL Toad-1 uses a completely different bus structure
than any DEC product, the KS-10 model (individually defined I/O instructions)
was used in the Toad, although of course the particular assignments are very
different.
Macro-10/Macro-20 (the DEC standard assembler) has the following code in the
initialization sequence; note that this particular version includes a test for
the XKL Toad-1, a PDP-10 clone.
Ah, the trouble that can be avoided by a small ROM.
--
Pete
Scott Lurndal
2023-08-31 14:34:55 UTC
Post by Elijah Stone
I am looking for information on the history and development of ISAs as a
technology.
- Why were they invented? (IBM wanted to sell their customers hardware
upgrades without requiring that they rewrite their code?)
- What was early sentiment towards them like? (From users, from
compiler-writers, from other CPU designers...)
- What was their adoption story like? How long did it take before they were
basically totally ubiquitous, as today?
- What were the _social_ factors that caused GPUs to expose primarily
higher-level interfaces, leaving the lower-level ones unstable and mostly
unused, unlike CPUs?
Any pointers?
comp.arch may be a better usenet newsgroup for this topic.

Generally your inquiry is very, very wide-ranging, and must start
with the earliest digital computers.

As a start, in early machines (late 50's), the term 'instruction'
hadn't yet become common - they used the term 'order' instead. Even
into the 90's, the processor instruction verification suite at
Burroughs was called "all-orders".
Ahem A Rivet's Shot
2023-08-31 15:59:39 UTC
On Thu, 31 Aug 2023 14:34:55 GMT
Post by Scott Lurndal
As a start, in early machines (late 50's), the term 'instruction'
hadn't yet become common - they used the term 'order' instead. Even
into the 90's, the processor instruction verification suite at
Burroughs was called "all-orders".
The French word for computer is ordinateur.
--
Steve O'Hara-Smith
Odds and Ends at http://www.sohara.org/
Host: Beautiful Theory meet Inconvenient Fact
Obit: Beautiful Theory died today of factual inconsistency
John Levine
2023-09-02 02:35:11 UTC
Post by Elijah Stone
I am looking for information on the history and development of ISAs as a
technology.
- Why were they invented? (IBM wanted to sell their customers hardware
upgrades without requiring that they rewrite their code?)
As others have noted, it sounds like you mean computer architecture, not ISA.

In the early 1950s it was a miracle when computers worked at all, and
programs had to be written to work around hardware bugs and
idiosyncracies. I heard of a machine where you couldn't have too many
"1" bits in any word because the voltage in the Williams tubes used
for main memory wasn't strong enough.

I'd say the IBM 704 in 1954 was the first computer that really worked.
It had core memory so you didn't have to worry about bit patterns and
a conservative tube design that would run programs reliably for hours
at a time which in that era was a big deal. It also had hardware
floating point and high quality peripheral devices adapted from IBM's
card equipment. People wrote a lot of software for it, including
the first Fortran compiler and a lot of Fortran programs. The next
machine, the 709, had a superset of the 704's ISA so it could run the
same software, the 7090 was a faster transistor version of the 709, and
so forth. They also had some decimal business machines starting with
the 702 and 705. One time they came out with a new business machine
which was faster and better but incompatible with the previous ones;
the customers said Nope, and they quickly came up with a compatible
upgrade. They also had a small business line, the 1400, and small
scientific 1620.

By 1960 it was well established that software was a big investment,
and it was important that new computers run existing software, and it
was increasingly expensive for computer makers to maintain multiple
product lines so everyone could upgrade. IBM then bet the company on
System 360, which was the first design where they started with the
architecture and then built multiple implementations that ran faster
or slower and could have larger or smaller memory. This is the classic
article:

https://www.researchgate.net/publication/220498837_Architecture_of_the_IBM_System360

The 360 wasn't compatible with any of the older machines but they made
a bet, which turned out to be a good one, that they could get people
to convert once in return for the promise that the 360 architecture
would last forever so they wouldn't have to convert again even as they
moved to larger or smaller models. (This was true, modern zSeries
mainframes still run S/360 code.) They also hedged their bets a
little; all but the fastest 360s were microprogrammed, and they
offered microcode compatibility packages. If, for example, you had a
360/65, you could boot it up in 7094 mode and it would run 70x code
faster than any real 70x machine until you finished rewriting or
recompiling your programs into 360 code.

The plan was also that there would be one operating system and one set
of applications but that failed because the OS/360 operating system
turned out to be way too big to run on the smaller machines. So they
quickly came up with cut down systems DOS and TOS (Disk and Tape) and
very simple BOS (Basic). That turned out to be OK, too. OS and DOS are
still around in greatly evolved form, and still run OS and DOS code
from the 1960s.

The 360 was such a great success that we now often forget how
revolutionary it was. It was not only the first architecture intended
from the outset to have multiple implementations, but it was also the
first one with 8 bit bytes and the kind of byte addressing that is now
used everywhere, and the first popular machine with a large regular
set of registers.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Kerr-Mudd, John
2023-09-02 08:53:51 UTC
On Sat, 2 Sep 2023 02:35:11 -0000 (UTC)
Post by John Levine
The 360 was such a great success that we now often forget how
revolutionary it was. It was not only the first architecture intended
from the outset to have multiple implementations, but it was also the
first one with 8 bit bytes and the kind of byte addressing that is now
used everywhere, and the first popular machine with a large regular
set of registers.
I'm sure the Wheelers will be along shortly to tell of their involvement.
- a Big Thing that you omitted about the 360 was the invention?/use of VM.
--
Bah, and indeed Humbug.
Kerr-Mudd, John
2023-09-02 08:57:51 UTC
On Sat, 2 Sep 2023 09:53:51 +0100
"Kerr-Mudd, John" <***@127.0.0.1> wrote:

[]
Post by Kerr-Mudd, John
I'm sure the Wheelers will be along shortly to tell of their involvement.
- a Big Thing that you omitted about the 360 was the invention?/use of VM.
Oh dear, Lynn Wheeler last posted back in April, without Anne.
--
Bah, and indeed Humbug.
Ahem A Rivet's Shot
2023-09-04 06:36:18 UTC
On Sat, 2 Sep 2023 09:57:51 +0100
Post by Kerr-Mudd, John
On Sat, 2 Sep 2023 09:53:51 +0100
Oh dear, Lynn Wheeler last posted back in April, without Anne.
Indeed I had been noticing Lynn's absence.
--
Steve O'Hara-Smith
Odds and Ends at http://www.sohara.org/
Host: Beautiful Theory meet Inconvenient Fact
Obit: Beautiful Theory died today of factual inconsistency
John Levine
2023-09-02 17:38:49 UTC
Post by Kerr-Mudd, John
Post by John Levine
The 360 was such a great success that we now often forget how
revolutionary it was. It was not only the first architecture intended
from the outset to have multiple implmementations, but it was also the
first one with 8 bit bytes and the kind of byte addressing that is now
used everywhere, and the first popular machine with a large regular
set of registers.
I'm sure the Wheelers will be along shortly to tell of their involvement.
- a Big Thing that you omitted about the 360 was the invention?/use of VM.
Virtualization was a benefit of the 360's well defined architecture,
probably an accidental one. There was a clear spec for everything from
how you bootstrap, er, IPL your system to how you send commands to I/O
devices, how they send replies, and how they interrupt. So they
implemented that spec in software and whaddya know, it worked.

By the way, on Lynn's web site it says he's been posting like crazy on
Facebook in the past month, but I haven't checked.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Lynn Wheeler
2023-09-03 02:12:56 UTC
Post by John Levine
By the way, on Lynn's web site it says he's been posting like crazy on
Facebook in the past month, but I haven't checked.
well, lots of (facebook) 360, 360/30, 360/65, 360/mp, 360/67 (& couple
other things) ... some repeated in different groups
http://www.garlic.com/~lynn/2023e.html
--
virtualization experience starting Jan1968, online at home since Mar1970
Thomas Koenig
2023-09-04 11:02:23 UTC
Post by John Levine
The 360 was such a great success that we now often forget how
revolutionary it was. It was not only the first architecture intended
from the outset to have multiple implmementations, but it was also the
first one with 8 bit bytes and the kind of byte addressing that is now
used everywhere, and the first popular machine with a large regular
set of registers.
The PDP-6 had something close to 16 registers in its lowest 16
words of memory, but I guess it is possible to argue that it
wasn't very popular (but the PDP-10 later was), and that these
memory locations were not really registers; if I read correctly,
it was possible to put code into them and run it.
Johnny Billquist
2023-09-04 13:43:08 UTC
Post by Thomas Koenig
Post by John Levine
The 360 was such a great success that we now often forget how
revolutionary it was. It was not only the first architecture intended
from the outset to have multiple implmementations, but it was also the
first one with 8 bit bytes and the kind of byte addressing that is now
used everywhere, and the first popular machine with a large regular
set of registers.
The PDP-6 had something close to 16 registers in its lowest 16
words of memory, but I guess it is possible to argue that it
wasn't very popular (but the PDP-10 later was), and that these
memory locations were not really registers; if I read correctly,
it was possible to put code into them and run it.
Good points.
Definitely 16 registers, and yes, I would definitely call them
registers. The fact that they exist in memory space isn't that
important, nor that you could run code in them. They were treated
specially by the instructions.

And the PDP-6 was before the S/360. But it's also worth noting that the
PDP-6 (and descendants) are a computer architecture that have now been
dead for over 30 years.

The PDP-11 (at least some models) have the exact same behavior with the
registers. The CPU registers do exist in memory space, and you can run
code in them.

Johnny
Vir Campestris
2023-09-04 15:36:54 UTC
Post by Johnny Billquist
Good points.
Definitely 16 registers, and yes, I would definitely call them
registers. The fact that they exist in memory space isn't that
important, nor that you could run code in them. They were treated
specially by the instructions.
And the PDP-6 was before the S/360. But it's also worth noting that the
PDP-6 (and descendants) are a computer architecture that have now been
dead for over 30 years.
The PDP-11 (at least some models) have the exact same behavior with the
registers. The CPU registers do exist in memory space, and you can run
code in them.
I've always assumed that the registers were _saved_ into memory space
when the process was stopped.

The main point on a PDP10 of putting code into the registers was that it
went a lot faster - no memory access required.

Andy
Ahem A Rivet's Shot
2023-09-04 16:58:46 UTC
On Mon, 4 Sep 2023 16:36:54 +0100
Post by Vir Campestris
The main point on a PDP10 of putting code into the registers was that it
went a lot faster - no memory access required.
The TI 990/9900 series of machines had the general purpose registers
in memory pointed to by a CPU based register - context switching simply
involved pointing at a different bit of memory to get a new set of
registers. It made for very fast context switching and slow register
operations - an interesting trade off.
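
A toy model of the scheme in C (the three-register structure is real;
the names and the word-indexed WP are invented for brevity): the
sixteen "registers" are just memory at WP..WP+15, so a BLWP-style
switch touches only the few true hardware registers.

#include <stdint.h>

uint16_t mem[32768];                 /* word-addressed main memory */
struct cpu { uint16_t wp, pc, st; }; /* the only real registers:
                                        workspace ptr, PC, status  */

uint16_t reg_read(const struct cpu *c, int r)
{
    return mem[c->wp + r];           /* R0..R15 live in memory     */
}

/* BLWP-style context switch: nothing to save but three registers.
   (The real BLWP also stashes the old WP/PC/ST into R13-R15 of the
   new workspace so RTWP can return.) */
void switch_workspace(struct cpu *c, uint16_t new_wp, uint16_t new_pc)
{
    uint16_t old_wp = c->wp, old_pc = c->pc, old_st = c->st;
    c->wp = new_wp;
    mem[c->wp + 13] = old_wp;
    mem[c->wp + 14] = old_pc;
    mem[c->wp + 15] = old_st;
    c->pc = new_pc;
}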
--
Steve O'Hara-Smith
Odds and Ends at http://www.sohara.org/
Host: Beautiful Theory meet Inconvenient Fact
Obit: Beautiful Theory died today of factual inconsistency
Thomas Koenig
2023-09-04 17:09:29 UTC
Post by Ahem A Rivet's Shot
On Mon, 4 Sep 2023 16:36:54 +0100
Post by Vir Campestris
The main point on a PDP10 of putting code into the registers was that it
went a lot faster - no memory access required.
The TI 990/9900 series of machines had the general purpose registers
in memory pointed to by a CPU based register - context switching simply
involved pointing at a different bit of memory to get a new set of
registers. It made for very fast context switching and slow register
operations - an interesting trade off.
Or subroutine calls...

In the age of the 6502, when memory ran twice as fast as the CPU, this
was feasible. Not so much nowadays, where you can wait 150 cycles for
a main memory access.
Ahem A Rivet's Shot
2023-09-04 17:50:20 UTC
On Mon, 4 Sep 2023 17:09:29 -0000 (UTC)
Post by Thomas Koenig
Post by Ahem A Rivet's Shot
On Mon, 4 Sep 2023 16:36:54 +0100
Post by Vir Campestris
The main point on a PDP10 of putting code into the registers was that
it went a lot faster - no memory access required.
The TI 990/9900 series of machines had the general purpose
registers in memory pointed to by a CPU based register - context
switching simply involved pointing at a different bit of memory to get
a new set of registers. It made for very fast context switching and
slow register operations - an interesting trade off.
Or subroutine calls...
In the age of the 6502, when memory ran twice as fast as the CPU, this
was feasible. Not so much nowadays, where you can wait 150 cycles for
a main memory access.
I'd think L1 cache would work pretty well for the purpose and these
days there's more of that than main memory on many a 6502 based system.
--
Steve O'Hara-Smith
Odds and Ends at http://www.sohara.org/
Host: Beautiful Theory meet Inconvenient Fact
Obit: Beautiful Theory died today of factual inconsistency
John Dallman
2023-09-04 22:19:00 UTC
Post by Ahem A Rivet's Shot
The TI 990/9900 series of machines had the general purpose
registers in memory pointed to by a CPU based register -
context switching simply involved pointing at a different
bit of memory to get a new set of registers. It made for
very fast context switching and slow register operations
- an interesting trade off.
They were originally designed as industrial controllers, for uses where
the computation load was pretty light, but response to interrupts had to
be fast. They had 256 "ordinary" interrupt levels, plus 8
special-priority levels.

John
Peter Flass
2023-09-06 03:10:21 UTC
Post by Ahem A Rivet's Shot
On Mon, 4 Sep 2023 16:36:54 +0100
Post by Vir Campestris
The main point on a PDP10 of putting code into the registers was that it
went a lot faster - no memory access required.
The TI 990/9900 series of machines had the general purpose registers
in memory pointed to by a CPU based register - context switching simply
involved pointing at a different bit of memory to get a new set of
registers. It made for very fast context switching and slow register
operations - an interesting trade off.
GE 400 did this, but only with its one register.
--
Pete
Johnny Billquist
2023-09-04 17:25:35 UTC
Post by Vir Campestris
Post by Johnny Billquist
Good points.
Definitely 16 registers, and yes, I would definitely call them
registers. The fact that they exist in memory space isn't that
important, nor that you could run code in them. They were treated
specially by the instructions.
And the PDP-6 was before the S/360. But it's also worth noting that
the PDP-6 (and descendants) are a computer architecture that have now
been dead for over 30 years.
The PDP-11 (at least some models) have the exact same behavior with
the registers. The CPU registers do exist in memory space, and you can
run code in them.
I've always assumed that the registers were _saved_ into memory space
when the process was stopped.
The main point on a PDP10 of putting code into the registers was that it
went a lot faster - no memory access required.
As far as I know/can remember, there was the fast register option for
the KI10 (or was it KA?) which actually sat on top of the normal memory,
meaning the first 16 words of main memory was never used. And instead
you had some really fast memory there.

But if you didn't have it, then normal memory was used. With the KL and
KS I wouldn't know for sure. There is no obvious easy way to tell, I
think. But for the machine that had the fastmem option, it was fairly
obvious.

Johnny
Johnny Billquist
2023-09-04 17:31:50 UTC
Post by Johnny Billquist
Post by Vir Campestris
Post by Johnny Billquist
Good points.
Definitely 16 registers, and yes, I would definitely call them
registers. The fact that they exist in memory space isn't that
important, nor that you could run code in them. They were treated
specially by the instructions.
And the PDP-6 was before the S/360. But it's also worth noting that
the PDP-6 (and descendants) are a computer architecture that have now
been dead for over 30 years.
The PDP-11 (at least some models) have the exact same behavior with
the registers. The CPU registers do exist in memory space, and you
can run code in them.
I've always assumed that the registers were _saved_ into memory space
when the process was stopped.
The main point on a PDP10 of putting code into the registers was that
it went a lot faster - no memory access required.
As far as I know/can remember, there was the fast register option for
the KI10 (or was it KA?) which actually sat on top of the normal memory,
meaning the first 16 words of main memory were never used. And instead
you had some really fast memory there.
But if you didn't have it, then normal memory was used. With the KL and
KS I wouldn't know for sure. There is no obvious easy way to tell, I
think. But for the machine that had the fastmem option, it was fairly
obvious.
According to Wikipedia it was the KA10 that had the registers in memory,
and a fast register hardware option. All later models had the registers
in the CPU.

I guess that might imply that the PDP-6 also had the registers in main
memory.

Johnny
Lars Brinkhoff
2023-09-07 04:54:53 UTC
Post by Johnny Billquist
I guess that might imply that the PDP-6 also had the registers in main
memory.
There was a "fast memory" option to implement the accumulator storage
with flip-flops rather than core. I'm not sure about the details; it might have
been the first device on the memory bus.
Bob Eager
2023-09-04 20:55:07 UTC
Post by Vir Campestris
Post by Johnny Billquist
Good points.
Definitely 16 registers, and yes, I would definitely call them
registers. The fact that they exist in memory space isn't that
important, nor that you could run code in them. They were treated
specially by the instructions.
And the PDP-6 was before the S/360. But it's also worth noting that the
PDP-6 (and descendants) are a computer architecture that have now been
dead for over 30 years.
The PDP-11 (at least some models) have the exact same behavior with the
registers. The CPU registers do exist in memory space, and you can run
code in them.
I've always assumed that the registers were _saved_ into memory space
when the process was stopped.
My understanding is that they were special-cased by the hardware. They
were real registers, but reference to addresses 0-15 got the register
instead of memory. Also, that low end models didn't have actual registers,
just memory.
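
That special-casing, written the way an emulator might (36-bit words
held in a uint64_t; details invented):

#include <stdint.h>

#define NREGS 16
uint64_t ac[NREGS];        /* the fast accumulators               */
uint64_t core[262144];     /* 18-bit word address space of core   */

uint64_t fetch(unsigned addr)   /* used for data and instructions */
{
    return (addr < NREGS) ? ac[addr] : core[addr];
}

void store(unsigned addr, uint64_t w)
{
    if (addr < NREGS) ac[addr] = w; else core[addr] = w;
}

/* Because instruction fetch goes through the same decode, code
   placed at addresses 0-15 executes out of the registers - the
   old trick of running tight loops "in the ACs". */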
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Joe Pfeiffer
2023-09-06 15:49:29 UTC
Post by Johnny Billquist
The PDP-11 (at least some models) have the exact same behavior with
the registers. The CPU registers do exist in memory space, and you can
run code in them.
I don't remember ever coming across this -- which models?
Peter Flass
2023-09-06 18:35:01 UTC
Post by Joe Pfeiffer
Post by Johnny Billquist
The PDP-11 (at least some models) have the exact same behavior with
the registers. The CPU registers do exist in memory space, and you can
run code in them.
I don't remember ever coming across this -- which models?
I do think Xerox/SDS Sigma systems had this. The IBM 1130 had its index
registers in core (not backing real registers, although the 1800 did have
real registers too).
--
Pete
Joe Pfeiffer
2023-09-07 00:59:35 UTC
Post by Peter Flass
Post by Joe Pfeiffer
Post by Johnny Billquist
The PDP-11 (at least some models) have the exact same behavior with
the registers. The CPU registers do exist in memory space, and you can
run code in them.
I don't remember ever coming across this -- which models?
I do think Xerox/SDS Sigma systems had this. The IBM 1130 had its index
registers in core (not backing real registers, although the 1800 did have
real registers too).
Yes, there were several other computer families that did it. It's
specifically PDP 11 where I haven't heard of it before.
Scott Lurndal
2023-09-07 13:14:30 UTC
Post by Joe Pfeiffer
Post by Peter Flass
Post by Joe Pfeiffer
Post by Johnny Billquist
The PDP-11 (at least some models) have the exact same behavior with
the registers. The CPU registers do exist in memory space, and you can
run code in them.
I don't remember ever coming across this -- which models?
I do think Xerox/SDS Sigma systems had this. The IBM 1130 had its index
registers in core (not backing real registers, although the 1800 did have
real registers too).
Yes, there were several other computer families that did it. It's
specifically PDP 11 where I haven't heard of it before.
Indeed, the Burroughs BCD machines also had their index registers
in low memory.
Charlie Gibbs
2023-09-07 18:18:31 UTC
Permalink
Post by Scott Lurndal
Post by Joe Pfeiffer
Post by Peter Flass
Post by Joe Pfeiffer
Post by Johnny Billquist
The PDP-11 (at least some models) have the exact same behavior with
the registers. The CPU registers do exist in memory space, and you can
run code in them.
I don't remember ever coming across this -- which models?
I do think Xerox/SDS Sigma systems had this. The IBM 1130 had its index
registers in core (not backing real registers, although the 1800 did have
real registers too).
Yes, there were several other computer families that did it. It's
specifically PDP 11 where I haven't heard of it before.
Indeed, the Burroughs BCD machines also had their index registers
in low memory.
The Univac 9300 (their answer to the IBM 360/20) also stored its
registers in low memory. Two sets, actually, to support its primitive
version of problem and supervisor states. Its equivalent of the 360's
PSW was also in low memory. I had fun writing little programs that
did strange things, e.g. filling memory with an instruction that
decremented the program counter by 8, jumping to the last one, and
executing the program backwards.
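
(A toy model of the trick in C - made-up opcode and sizes, nothing like
real 9300 semantics - with memory filled by a "subtract 8 from the PC"
instruction and entered at the top:)

#include <stdio.h>

#define MEMSIZE 64
#define DECPC   0xD8   /* invented opcode: decrement PC by 8 */

int main(void)
{
    unsigned char mem[MEMSIZE];
    int pc;

    for (pc = 0; pc < MEMSIZE; pc++)
        mem[pc] = DECPC;            /* fill memory with the instruction */

    pc = MEMSIZE - 8;               /* "jump to the last one" */
    while (pc >= 0 && mem[pc] == DECPC) {
        printf("executing at %2d\n", pc);
        pc -= 8;                    /* each step walks backwards */
    }
    return 0;
}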
--
/~\ Charlie Gibbs | They offer a huge range of
\ / <***@kltpzyxm.invalid> | world-class vulnerabilities
X I'm really at ac.dekanfrus | that only Microsoft can provide.
/ \ if you read it the right way. | -- druck
Bob Eager
2023-09-07 20:41:02 UTC
Permalink
Post by Joe Pfeiffer
Post by Peter Flass
Post by Joe Pfeiffer
Post by Johnny Billquist
The PDP-11 (at least some models) have the exact same behavior with
the registers. The CPU registers do exist in memory space, and you
can run code in them.
I don't remember ever coming across this -- which models?
I do think Xerox/SDS Sigma systems had this. The IBM 1130 had its
index registers in core (not backing real registers, although the 1800
did have real registers too).
Yes, there were several other computer families that did it. It's
specifically PDP 11 where I haven't heard of it before.
Indeed, the Burroughs BCD machines also had their index registers in low
memory.
And the ICT 1900 (physically on the lower end of the range).
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Johnny Billquist
2023-09-07 09:57:47 UTC
Permalink
Post by Joe Pfeiffer
Post by Johnny Billquist
The PDP-11 (at least some models) have the exact same behavior with
the registers. The CPU registers do exist in memory space, and you can
run code in them.
I don't remember ever coming across this -- which models?
I won't dare try to list them all. I know that the 11/70 has it,
which implies that the 11/45, 11/50 and 11/55 also do. My guess is that
all PDP-11s have it, except those with a serial console built into the
CPU. So anything except the F11 and J11 based. Possibly also the T11.

General register set 0 is at 17777700 to 17777707. General register set
1 is at 17777710 to 17777717. The weird/funny thing is that when you
access these addresses on the front panel, addresses actually increment
by 1, and not the usual 2. But they are still 16 bit data on each
address (obviously, since they are the CPU registers).

It's all documented in the processor handbook. Some PDP-11 models allow
you to access the general registers in the I/O page, but do not allow
you to run from them, while others do. And some models do not have the
registers visible there at all. So it is rather model dependent.

This is a rather obscure piece of knowledge though, and I'm not
surprised most people are not aware of it. But think about it - if you
have a PDP-11 with a front panel, this is the only way to find out what
contents you have in your CPU registers...
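
A minimal sketch in C of how that decode could work (the addresses are
the ones above; the structure is my assumption, not a real KB11
schematic). It also shows why the front panel steps by 1 through this
range rather than the usual 2: the low bits select a register, not a
byte:

#include <stdint.h>
#include <stdio.h>

static uint16_t gpr[2][8];     /* general register sets 0 and 1 */

/* Console examine: returns 1 and the value if addr falls in the
   register range 17777700-17777717, else 0 (fall through to memory
   or the rest of the I/O-page decode). */
int examine(uint32_t addr, uint16_t *val)
{
    if (addr >= 017777700 && addr <= 017777717) {
        unsigned idx = addr - 017777700;  /* 0..15, stepping by 1 */
        *val = gpr[idx >> 3][idx & 7];    /* set number, register number */
        return 1;
    }
    return 0;
}

int main(void)
{
    uint16_t v;
    gpr[0][0] = 042;                      /* pretend R0 holds 042 */
    if (examine(017777700, &v))           /* EXAM of R0's address */
        printf("R0 = %06o\n", v);
    return 0;
}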

Johnny
Joe Pfeiffer
2023-09-07 21:00:21 UTC
Permalink
Post by Johnny Billquist
Post by Joe Pfeiffer
Post by Johnny Billquist
The PDP-11 (at least some models) have the exact same behavior with
the registers. The CPU registers do exist in memory space, and you can
run code in them.
I don't remember ever coming across this -- which models?
I won't dare try to list them all. I know that the 11/70 has it,
which implies that the 11/45, 11/50 and 11/55 also do. My guess is
that all PDP-11s have it, except those with a serial console built into
the CPU. So anything except the F11 and J11 based. Possibly also the T11.
General register set 0 is at 17777700 to 17777707. General register
set 1 is at 17777710 to 17777717. The weird/funny thing is that when
you access these addresses on the front panel, addresses actually
increment by 1, and not the usual 2. But they are still 16 bit data on
each address (obviously, since they are the CPU registers).
Yes, I was just able to verify this. Cool!
John Levine
2023-09-09 20:41:01 UTC
Permalink
Post by Johnny Billquist
Post by Joe Pfeiffer
Post by Johnny Billquist
The PDP-11 (at least some models) have the exact same behavior with
the registers. The CPU registers do exist in memory space, and you can
run code in them.
I don't remember ever coming across this -- which models?
I won't dare try to list them all. I know that the 11/70 has it,
which implies that the 11/45, 11/50 and 11/55 also do.
I did a lot of programming on an 11/45 and I can assure you the
registers were addressable as memory, at least not in any normal way.
Post by Johnny Billquist
General register set 0 is at 17777700 to 17777707. General register set
1 is at 17777710 to 17777717.
That was for debugging. It wasn't useful in normal code.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
John Levine
2023-09-09 20:41:29 UTC
Permalink
Post by John Levine
Post by Johnny Billquist
Post by Joe Pfeiffer
Post by Johnny Billquist
The PDP-11 (at least some models) have the exact same behavior with
the registers. The CPU registers do exist in memory space, and you can
run code in them.
I don't remember ever coming across this -- which models?
I won't dare try to list them all. I know that the 11/70 has it,
which implies that the 11/45, 11/50 and 11/55 also do.
I did a lot of programming on an 11/45 and I can assure you the
registers were addressable as memory, at least not in any normal way.
sigh. were NOT addressable ...
Post by John Levine
Post by Johnny Billquist
General register set 0 is at 17777700 to 17777707. General register set
1 is at 17777710 to 17777717.
That was for debugging. It wasn't useful in normal code.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Johnny Billquist
2023-09-10 00:27:52 UTC
Permalink
Post by John Levine
Post by John Levine
Post by Johnny Billquist
Post by Joe Pfeiffer
Post by Johnny Billquist
The PDP-11 (at least some models) have the exact same behavior with
the registers. The CPU registers do exist in memory space, and you can
run code in them.
I don't remember ever coming across this -- which models?
I won't dare try to list them all. I know that the 11/70 has it,
which implies that the 11/45, 11/50 and 11/55 also do.
I did a lot of programming on an 11/45 and I can assure you the
registers were addressable as memory, at least not in any normal way.
sigh. were NOT addressable ...
I was reading up on some other things, and came across the fact that on
the KB11, the registers are only accessible at those addresses from the
front panel, not from the CPU.

There are, however, some CPUs where they were also accessible from the
CPU. There are so many variations on the PDP-11...

Johnny
John Levine
2023-09-09 20:55:15 UTC
Permalink
Post by Thomas Koenig
Post by John Levine
The 360 was such a great success that we now often forget how
revolutionary it was. It was not only the first architecture intended
from the outset to have multiple implementations, but it was also the
first one with 8-bit bytes and the kind of byte addressing that is now
used everywhere, and the first popular machine with a large regular
set of registers.
The PDP-6 had something close to 16 registers in its lowest 16
words of memory, but I guess it is possible to argue that it
wasn't very popular ...
I actually programmed a PDP-6 when I was in high school.

The first 16 memory locations were the registers. On the PDP-6 and
KA-10, transistor registers were optional, otherwise stored in the
first 16 locations of core, but everyone bought the fast registers so
they became standard on the KI-10.

The PDP-6 was a technical triumph but a commercial failure. They
used large boards with unreliable connectors making the machine
flaky. DEC only made 20 of them, and cancelled the product. It's
not entirely clear to me why they gave the -10 another try but it
was clearly a good move.

S/360 and the PDP-6 were both announced in 1964, so they obviously
both had been designed some time before, as far as I know without
either having knowledge of the other. The first 360s were shipped in
1965, so DEC probably shipped a PDP-6 before IBM shipped a 360, but I
would say they were simultaneous, and the -6 definitely was not
popular, even though it was well loved by its friends.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Rich Alderson
2023-09-09 22:39:35 UTC
Permalink
Post by John Levine
The 360 was such a great success that we now often forget how revolutionary
it was. It was not only the first architecture intended from the outset to
have multiple implementations, but it was also the first one with 8-bit
bytes and the kind of byte addressing that is now used everywhere, and the
first popular machine with a large regular set of registers.
The PDP-6 had something close to 16 registers in its lowest 16 words of
memory, but I guess it is possible to argue that it wasn't very popular ...
I actually programmed a PDP-6 when I was in high school.
The first 16 memory locations were the registers. On the PDP-6 and KA-10,
transistor registers were optional, otherwise stored in the first 16
locations of core, but everyone bought the fast registers so they became
standard on the KI-10.
The PDP-6 was a technical triumph but a commercial failure. They used large
boards with unreliable connectors making the machine flaky. DEC only made 20
of them, and cancelled the product. It's not entirely clear to me why they
gave the -10 another try but it was clearly a good move.
The PDP-6 was a technical triumph, meaning that those customers who purchased a
PDP-6 were very happy with the capabilities of the system even if they were not
particularly happy with the implementation (which was an expansion on the
System Module(TM) technology used successfully in the PDP-1, PDP-4, and PDP-5).

The PDP-10 was designed using the new technology introduced with the PDP-7, the
FlipChip(TM), and used to good effect in the PDP-8 and PDP-9. The 36 bit
architecture redesign was a skunkworks project, hidden from Ken Olsen until it
was ready to go; the initial product line included three models (the 10/10,
10/20, and 10/30) which were nominally single user systems, to make KO happy,
as well as a nonswapping (10/40) and a swapping (10/50) multiuser model.

It is reported that someone did order a 10/30.[1]
Post by John Levine
S/360 and the PDP-6 were both announced in 1964, so they obviously both had
been designed some time before, as far as I know without either having
knowledge of the other. The first 360s were shipped in 1965, so DEC probably
shipped a PDP-6 before IBM shipped a 360, but I would say they were
simultaneous, and the -6 definitely was not popular, even though it was well
loved by its friends.
The System/360 family were announced in April, 1964, with first customer ship
in October, 1965.

The PDP-6 was announced three weeks before, in March, 1964, with first customer
ship in June, 1964.

A lot less vapor in the PDP-6 than in the S/360, don't you think?

[1] I'd love a PDP-10/30: 32KW memory, paper tape and DECtape, Teletype console.
--
Rich Alderson ***@alderson.users.panix.com
Audendum est, et veritas investiganda; quam etiamsi non assequamur,
omnino tamen proprius, quam nunc sumus, ad eam perveniemus.
--Galen
John Levine
2023-09-10 01:57:38 UTC
Permalink
Post by Rich Alderson
The PDP-10 was designed using the new technology introduced with the PDP-7, the
FlipChip(TM), and used to good effect in the PDP-8 and PDP-9. The 36 bit
architecture redesign was a skunkworks project, hidden from Ken Olsen until it
was ready to go; the initial product line included three models (the 10/10,
10/20, and 10/30) which were nominally single user systems, to make KO happy,
as well as a nonswapping (10/40) and a swapping (10/50) multiuser model.
It is reported that someone did order a 10/30.[1]
I briefly used a PDP-10 that was used to monitor some kind of high
energy physics experiments at Princeton. I'd think that for some kinds
of realtime stuff it'd make sense: large word size, hardware floating
point, fast interrupts, and a simple hardware interface.
Post by Rich Alderson
The System/360 family were announced in April, 1964, with first customer ship
in October, 1965.
The PDP-6 was announced three weeks before, in March, 1964, with first customer
ship in June, 1964.
A lot less vapor in the PDP-6 than in the S/360, don't you think?
Not really comparable: the PDP-6 announcement was one machine; the S/360
announcement was six models, of which they shipped four in 1965. The
360/40 shipped in April 1965. IBM certainly preannounced to try and keep
customers from jumping ship, but the computers were real.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Lynn Wheeler
2023-09-09 23:21:00 UTC
Permalink
Post by John Levine
S/360 and the PDP-6 were both announced in 1964, so they obviously
both had been designed some time before, as far as I know without
either having knowledge of the other. The first 360s were shipped in
1965, so DEC probably shipped a PDP-6 before IBM shipped a 360, but I
would say they were simultaneous, and the -6 definitely was not
popular, even though it was well loved by its friends.
one of the co-workers (at the science center) told the story that in
the gov/IBM trial ... BUNCH members testified that by 1959 all had
realized (because of software costs) that a compatible architecture was
needed across the product lines ... but IBM was the only one with
executives that managed to enforce compatibility (which also required a
description that all could follow).

account of the end of ACS/360 (ACS started out incompatible, but
Amdahl managed to carry the day for compatibility): executives shut it
down because they were afraid it would advance the state of the art too
fast and they would lose control of the market
https://people.cs.clemson.edu/~mark/acs_end.html
Amdahl left IBM shortly after.

Early 70s, IBM had the "Future System" project (as a countermeasure to
clone mainframe I/O controllers), completely different from 370 and
going to completely replace it. Internal politics were shutting down 370
efforts (the claim is that the lack of new 370s during the FS period is
credited with giving the 370 clone makers their market foothold, i.e.
the failed countermeasure to clone controllers enabled the rise of clone
systems).

Amdahl gave a talk in a large MIT auditorium shortly after forming his
company. Somebody in the audience asked him what justification for his
company he had used with investors. He said that there was enough
customer 360 software that, even if IBM were to completely walk away
from 360 ... it was sufficient to keep him in business through the end
of the century (sort of implying he knew about "FS", though in later
years he claimed he knew nothing about FS).

When FS finally imploded, there was a mad rush to get stuff back into
the 370 product pipelines ... including kicking off the quick&dirty
3033 & 3081 projects in parallel
http://www.jfsowa.com/computer/memo125.htm
https://en.wikipedia.org/wiki/IBM_Future_Systems_project

trivia: a decade ago I was asked (in newsgroups) if I could track down
the IBM decision to add virtual memory to all 370s, and found the
staff-to-executive email exchange behind the decision. Basically,
(OS/360) MVT storage management was so bad that regions frequently had
to be specified four times larger than used, resulting in a typical
1mbyte 370/165 only being able to run four regions concurrently,
insufficient to keep the processor busy and justify the machine's cost.
Mapping MVT into a 16mbyte virtual address space allowed increasing the
number of concurrently running regions by a factor of four with little
or no paging (initially MVT->VS2 was little different from running MVT
in a CP67 16mbyte virtual machine). Pieces of that email exchange are in
this archived (11mar2011 afc) post
http://www.garlic.com/~lynn/2011d.html#73

trivia2: the 360 (& then 370) architecture manual was moved to CMS
SCRIPT (redone from CTSS RUNOFF); a command-line option generated either
the Principles of Operation subset or the full architecture manual (with
engineering notes, justifications, alternative implementations for
different models, etc).
http://www.bitsavers.org/pdf/ibm/360/princOps/
http://www.bitsavers.org/pdf/ibm/370/princOps/

trivia3: the 370/165 engineers complained that if they had to do the
full 370 virtual memory architecture, it would delay the virtual memory
announcement by six months. Eventually the decision was to do just the
165 subset, and the other models (and software), having already done the
full architecture, had to drop back to the 165 subset.

other discussion in this (linkedin) post, starting with Learson
attempting (but failing) to block the bureaucrats, careerists, and MBAs
from destroying the Watson legacy ... two decades later, IBM had one of
the largest losses in US corporate history and was being re-orged into
13 "baby blues" in preparation for breaking up the company
https://www.linkedin.com/pulse/john-boyd-ibm-wild-ducks-lynn-wheeler/
--
virtualization experience starting Jan1968, online at home since Mar1970