Discussion: Color Palette for DEC PDP families
Joseph Ambrose
2012-07-24 23:49:29 UTC
Hello Group!!!

I remember seeing this topic mentioned a few years ago, but I can't seem to bring it up on Google.... or any other search engine....

What are the color schemes for the different DEC PDP system families?

Specifically the PDP-10, PDP-8, and PDP-11. I'm looking for the Pantone specifications.

Thanks Much!!!

Joe Ambrose
Quadibloc
2012-07-25 00:05:30 UTC
Post by Joseph Ambrose
Specifically the PDP-10, PDP-8, and PDP-11. I'm looking for the Pantone specifications.
I think I actually saw the Pantone specifications for at least one
model in a document from Bitsavers. Likely the engineering drawings.
There are also maintenance manuals, but they're less likely to have
that level of information.

Had you not required such exacting information, my web page at

http://www.quadibloc.com/comp/pan07.htm

and

http://www.quadibloc.com/comp/pan08.htm

might have sufficed.

John Savard
Joseph Ambrose
2012-07-25 00:27:35 UTC
Post by Quadibloc
Post by Joseph Ambrose
Specifically the PDP-10, PDP-8, and PDP-11. I'm looking for the Pantone specifications.
I think I actually saw the Pantone specifications for at least one
model in a document from Bitsavers. Likely the engineering drawings.
There are also maintenance manuals, but they're less likely to have
that level of information.
Had you not required such exacting information, my web page at
http://www.quadibloc.com/comp/pan07.htm
and
http://www.quadibloc.com/comp/pan08.htm
might have sufficed.
John Savard
John,

Thanks for the links; I'm rather impressed by the quality and accuracy of the diagrams.
Quadibloc
2012-07-25 20:43:48 UTC
Post by Joseph Ambrose
Thanks for the links; I'm rather impressed by the quality and accuracy of the diagrams.
In the area of color, though, note that I just picked the closest
match from the set of 216 "internet-safe" colors for the diagrams. In
one set of diagrams, I went outside that palette, but only to achieve
a 3-D shading effect.
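
If anyone wants to know what "closest match" means concretely: the 216
"internet-safe" colors are just the RGB values whose channels are
multiples of 51, so snapping an arbitrary color onto that palette
amounts to rounding each channel to the nearest such multiple. A rough
C sketch of that rounding (the sample color below is made up, not any
actual DEC paint specification):

#include <stdio.h>

/* Round one 0-255 channel to the nearest multiple of 51;
   the web-safe palette uses exactly 0, 51, 102, 153, 204, 255. */
static unsigned char websafe(unsigned char c)
{
    return (unsigned char)(((c + 25) / 51) * 51);
}

int main(void)
{
    unsigned char r = 0x8A, g = 0x9B, b = 0xC4;  /* made-up panel color */
    printf("#%02X%02X%02X -> #%02X%02X%02X\n",
           r, g, b, websafe(r), websafe(g), websafe(b));
    return 0;
}

Because the web-safe palette is a full 6x6x6 grid, rounding each channel
independently really does give the nearest palette entry.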

For what you were really looking for, if you find it, you'll probably
have to thank Al Kossow.

John Savard
Peter Flass
2012-07-25 23:13:34 UTC
Post by Quadibloc
Post by Joseph Ambrose
Thanks for the links; I'm rather impressed by the quality and accuracy of the diagrams.
In the area of color, though, note that I just picked the closest
match from the set of 216 "internet-safe" colors for the diagrams. In
one set of diagrams, I went outside that palette, but only to achieve
a 3-D shading effect.
For what you were really looking for, if you find it, you'll probably
have to thank Al Kossow.
John Savard
Or go around to computer museums with a gadget that matches colors.
--
Pete
gareth
2012-07-25 17:32:09 UTC
Post by Quadibloc
Had you not required such exacting information, my web page at
http://www.quadibloc.com/comp/pan07.htm
and
http://www.quadibloc.com/comp/pan08.htm
might have sufficed.
Many thanks for an interesting web site, John, which I have browsed
extensively.

I beg to differ on one point: having been brought up on the PDP-11 and
its little-endian scheme, to me that is the logical way of progressing,
because with multi-byte (or multi-word) integer arithmetic each
succeeding byte offset corresponds to the appropriate increasing power
of 2.

Any difficulties perceived in reading off hexadecimal are trivially
covered by a simple program.
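
To illustrate both points with a toy C sketch (nothing PDP-11-specific
is assumed here): multi-precision addition over little-endian byte
arrays simply walks upward through memory, because the byte at offset i
always carries weight 256^i, and the "simple program" for humans is
just a loop that prints the bytes in the opposite order:

#include <stdio.h>

#define NBYTES 8

/* Multi-precision add of little-endian byte arrays: the byte at
   offset i has weight 256^i, so the loop walks upward through
   memory, carrying as it goes. */
static void add_le(unsigned char *sum,
                   const unsigned char *a,
                   const unsigned char *b)
{
    unsigned carry = 0;
    for (int i = 0; i < NBYTES; i++) {
        unsigned t = a[i] + b[i] + carry;
        sum[i] = (unsigned char)t;
        carry  = t >> 8;
    }
}

int main(void)
{
    unsigned char a[NBYTES] = { 0xFF, 0xFF, 0x01 };  /* 0x0001FFFF */
    unsigned char b[NBYTES] = { 0x01 };              /* 0x00000001 */
    unsigned char s[NBYTES] = { 0 };

    add_le(s, a, b);
    for (int i = NBYTES - 1; i >= 0; i--)  /* print MSB first for humans */
        printf("%02X", s[i]);
    printf("\n");                          /* 0000000000020000 */
    return 0;
}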
Quadibloc
2012-07-25 20:55:28 UTC
Post by gareth
I beg to differ on one point: having been brought up on the PDP-11 and
its little-endian scheme, to me that is the logical way of progressing,
because with multi-byte (or multi-word) integer arithmetic each
succeeding byte offset corresponds to the appropriate increasing power
of 2.
Any difficulties perceived in reading off hexadecimal are trivially
covered by a simple program.
Consistent little-endian operation certainly is logical; but it's
confusing to people who were brought up reading and writing in English
or other languages written from left to right.

When one reads Hebrew or Arabic from right to left, one encounters the
digits of a number in the little-endian fashion, the least significant
one first.

The problem isn't that it's hard to read a core dump. The problem is
that conceptually it's hard to remember where certain things should
be. In the spec for a block cipher, when it's enciphering a sequence
of ASCII characters transmitted on a communications channel, where
does the first character go?

In a big-endian world, the first character automatically goes in the
first part of the binary 64-bit number that is operated on... there's
no need to read over the spec carefully and hunt for the fine print.
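
To make that concrete, here is a small C sketch; the eight-character
message is made up, and the point is only where the first character
lands when the received bytes are copied straight into a 64-bit word:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    /* eight ASCII characters, in the order they arrive on the wire */
    const char msg[] = "ABCDEFGH";
    uint64_t block;

    /* copy the bytes into one 64-bit word, as a block cipher
       implementation would before operating on it */
    memcpy(&block, msg, 8);

    /* on a big-endian machine 'A' is now the most significant byte
       of the word; on a little-endian machine (PDP-11 style, x86,
       ...) it is the least significant byte */
    printf("block = %016llX\n", (unsigned long long)block);
    printf("the first character is in the %s byte\n",
           ((block >> 56) & 0xFF) == 'A' ? "most significant"
                                         : "least significant");
    return 0;
}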

Little-endian bit numbering makes sense for circuit boards that attach
a 12-bit A/D converter to a 16-bit bus, for example, and little-endian
storage of multi-word numbers makes multi-precision arithmetic
simpler. Both systems have their advantages.

What I personally think is the decisive advantage in favor of big-
endian, which is why I wish the PDP-11 had never introduced the little-
endian alternative, is that it is pedagogically simpler; it lets people
new to computers be taught how computers work at the machine-language
and hardware level more quickly, by removing an opportunity for
confusion.

And big-endian is generally preferable if a computer has a decimal
data type, for the purpose of doing a few short quick arithmetic
computations on numbers that are stored in text form in order to keep
the contents of a data file or database human-readable.

That's why the IBM 360, designed to replace both the scientific 7090
and the commercial 1401, was big-endian. That, and the fact that the
alternative likely wasn't even considered as a possibility (before the
PDP-11, many computers did store the two words of things like double-
precision integers in little-endian form, but characters within a word
were, AFAIK, always big-endian), of course.

Little-endian isn't bad - it is logical and consistent. But it makes
life more complicated; as the newcomer, it is the one that created a
headache of incompatibility.

John Savard
BGB
2012-07-26 04:29:32 UTC
Post by Quadibloc
Post by gareth
I beg to differ on one point: having been brought up on the PDP-11 and
its little-endian scheme, to me that is the logical way of progressing,
because with multi-byte (or multi-word) integer arithmetic each
succeeding byte offset corresponds to the appropriate increasing power
of 2.
Any difficulties perceived in reading off hexadecimal are trivially
covered by a simple program.
Consistent little-endian operation certainly is logical; but it's
confusing to people who were brought up reading and writing in English
or other languages written from left to right.
When one reads Hebrew or Arabic from right to left, one encounters the
digits of a number in the little-endian fashion, the least significant
one first.
The problem isn't that it's hard to read a core dump. The problem is
that conceptually it's hard to remember where certain things should
be. In the spec for a block cipher, when it's enciphering a sequence
of ASCII characters transmitted on a communications channel, where
does the first character go?
In a big-endian world, the first character automatically goes in the
first part of the binary 64-bit number that is operated on... there's
no need to read over the spec carefully and hunt for the fine print.
Little-endian bit numbering makes sense for circuit boards that attach
a 12-bit A/D converter to a 16-bit bus, for example, and little-endian
storage of multi-word numbers makes multi-precision arithmetic
simpler. Both systems have their advantages.
What I personally think is the decisive advantage in favor of big-
endian, which is why I wish the PDP-11 had never introduced the little-
endian alternative, is that it is pedagogically simpler; it lets people
new to computers be taught how computers work at the machine-language
and hardware level more quickly, by removing an opportunity for
confusion.
And big-endian is generally preferable if a computer has a decimal
data type, for the purpose of doing a few short quick arithmetic
computations on numbers that are stored in text form in order to keep
the contents of a data file or database human-readable.
That's why the IBM 360, designed to replace both the scientific 7090
and the commercial 1401, was big-endian. That, and the fact that the
alternative likely wasn't even considered as a possibility (before the
PDP-11, many computers did store the two words of things like double-
precision integers in little-endian form, but characters within a word
were, AFAIK, always big-endian), of course.
Little-endian isn't bad - it is logical and consistent. But it makes
life more complicated; as the newcomer, it is the one that created a
headache of incompatibility.
well, a person could just as easily transpose how they think about the
numbers...

say, a person writes out a digit stream with the least significant
digits on the left (I actually fairly often do this when thinking about
binary bit-streams).

one can imagine the bits as:
01234567 01234567 ...
with 0 as the LSB, and 7 as the MSB.

granted, it could confuse people if done with decimal or hex numbers
without some clear visual indicator.

but, a person can get fairly used to swapping around the hex digits when
reading values from hex-dumps or similar.


if designing a file format, there are a few options:
just pick one;
maybe consider a numerical representation which makes endianness largely
irrelevant (say, by only having a single plausible ordering).

typically, I use:
LE for multibyte integers;
BE for byte-oriented variable-length integers (I have a preferred format
here, similar to UTF-8 and the format used in MKV);
LE for bitstream variable-length integers (I personally prefer LE bitstreams).
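
as a rough sketch of the byte-oriented big-endian kind (this is the
plain VLQ / MIDI-style scheme, 7 bits per byte with a continuation
flag, not necessarily my exact format):

#include <stdio.h>
#include <stdint.h>

/* encode v as a big-endian variable-length integer: 7 bits per byte,
   most significant group first, high bit set on all but the last
   byte; returns the number of bytes written */
static int vlq_encode(uint32_t v, unsigned char *out)
{
    unsigned char tmp[5];
    int n = 0, i;
    do {
        tmp[n++] = v & 0x7F;
        v >>= 7;
    } while (v);
    for (i = 0; i < n; i++)     /* emit most significant group first */
        out[i] = tmp[n - 1 - i] | (i + 1 < n ? 0x80 : 0x00);
    return n;
}

static uint32_t vlq_decode(const unsigned char *in, int *len)
{
    uint32_t v = 0;
    int i = 0;
    do {
        v = (v << 7) | (in[i] & 0x7F);
    } while (in[i++] & 0x80);
    *len = i;
    return v;
}

int main(void)
{
    unsigned char buf[5];
    int n = vlq_encode(300, buf), m;   /* 300 encodes as 82 2C */
    printf("encoded in %d bytes: %02X %02X\n", n, buf[0], buf[1]);
    printf("decoded: %u (%d bytes)\n", vlq_decode(buf, &m), m);
    return 0;
}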

...
Post by Quadibloc
John Savard