Discussion:
“A damn stupid thing to do”—the origins of C
Peter Flass
2020-12-13 02:46:16 UTC
Permalink
https://apple.news/A_-wNCMkYTUqUYmLw17kK-A
--
Pete
Jorgen Grahn
2020-12-13 08:07:26 UTC
Permalink
Post by Peter Flass
https://apple.news/A_-wNCMkYTUqUYmLw17kK-A
Aka <https://arstechnica.com/features/2020/12/a-damn-stupid-thing-to-do-the-origins-of-c/>

The first half is about CPL, the next 1/4 about BCPL, then B and C at
Bell Labs. I should read it, but I need to wrap Christmas gifts
... and solve adventofcode.com/2020/ puzzles in C++.

Thanks!

/Jorgen
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Bob Eager
2020-12-13 14:49:10 UTC
Permalink
Post by Peter Flass
https://apple.news/A_-wNCMkYTUqUYmLw17kK-A
I have actually met David Barron on a number of occasions. I have had
dinner with David Wheeler several times. When I went up to Martin
Campbell-Kelly (who I had admired for years) at a meeting a couple of
years ago, I was gobsmacked to find out that he already knew who *I* was!
I have had a lot of contact with Martin Richards (I used to run a BCPL
user group).

And I met Ritchie and Thompson once, briefly.

This is all quite worrying as some of them are now dead.
--
Using UNIX since v6 (1975)...

Use the BIG mirror service in the UK:
http://www.mirrorservice.org
Dallas
2020-12-14 19:30:10 UTC
Permalink
Post by Bob Eager
Post by Peter Flass
https://apple.news/A_-wNCMkYTUqUYmLw17kK-A
I have actually met David Barron on a number of occasions. I have had
dinner with David Wheeler several times. When I went up to Martin
Campbell-Kelly (who I had admired for years) at a meeting a couple of
years ago, I was gobsmacked to find out that he already knew who *I* was!
I have had a lot of contact with Martin Richards (I used to run a BCPL
user group).
And I met Ritchie and Thompson once, briefly.
This is all quite worrying as some of them are now dead.
I remember when Dennis Ritchie would post on the newsgroup comp.lang.c
Dan Espen
2020-12-14 20:00:54 UTC
Permalink
Post by Dallas
Post by Bob Eager
Post by Peter Flass
https://apple.news/A_-wNCMkYTUqUYmLw17kK-A
I have actually met David Barron on a number of occasions. I have had
dinner with David Wheeler several times. When I went up to Martin
Campbell-Kelly (who I had admired for years) at a meeting a couple of
years ago, I was gobsmacked to find out that he already knew who *I* was!
I have had a lot of contact with Martin Richards (I used to run a BCPL
user group).
And I met Ritchie and Thompson once, briefly.
This is all quite worrying as some of them are now dead.
I remember when Dennis Ritchie would post on the newsgroup comp.lang.c
Sort of gives him an air of authority.
He posted here many times.
--
Dan Espen
Scott Lurndal
2020-12-14 21:13:55 UTC
Permalink
Post by Dan Espen
Post by Dallas
Post by Bob Eager
Post by Peter Flass
https://apple.news/A_-wNCMkYTUqUYmLw17kK-A
I have actually met David Barron on a number of occasions. I have had
dinner with David Wheeler several times. When I went up to Martin
Campbell-Kelly (who I had admired for years) at a meeting a couple of
years ago, I was gobsmacked to find out that he already knew who *I* was!
I have had a lot of contact with Martin Richards (I used to run a BCPL
user group).
And I met Ritchie and Thompson once, briefly.
This is all quite worrying as some of them are now dead.
I remember when Dennis Ritchie would post on the newsgroup comp.lang.c
Sort of gives him an air of authority.
Technically 'gave' him. http://www.legacy.com/ns/dennis-ritchie-obituary/154063273

I met him, Steven Johnson and a few others once at the USL Summit facility back in the
early 90's.
Dallas
2020-12-14 22:58:24 UTC
Permalink
Post by Dan Espen
Post by Dallas
Post by Bob Eager
Post by Peter Flass
https://apple.news/A_-wNCMkYTUqUYmLw17kK-A
I have actually met David Barron on a number of occasions. I have had
dinner with David Wheeler several times. When I went up to Martin
Campbell-Kelly (who I had admired for years) at a meeting a couple of
years ago, I was gobsmacked to find out that he already knew who *I* was!
I have had a lot of contact with Martin Richards (I used to run a BCPL
user group).
And I met Ritchie and Thompson once, briefly.
This is all quite worrying as some of them are now dead.
I remember when Dennis Ritchie would post on the newsgroup comp.lang.c
Sort of gives him an air of authority.
He posted here many times.
I took a multi-year hiatus from Usenet.
Went from needing serious consultation on C using comp.lang.c when I was a C programmer to looking
at C as folklore after not using it (or Usenet) for a decade or so.

So I missed my chance to interact with Dennis here.

I would hate to live without a garbage collector these days.
But if I needed to do anything "real-time" ("ring zero" stuff) I think C is still a viable choice
of language.
I liked the standardization documents and the language definition documents that accompanied C.

IIRC the world treated languages differently back then. I don't have an exact description of how
they were different, but it seemed things were more formally specified back then. BNF diagrams
and all.
Peter Flass
2020-12-14 23:23:55 UTC
Permalink
Post by Dallas
Post by Dan Espen
Post by Dallas
Post by Bob Eager
Post by Peter Flass
https://apple.news/A_-wNCMkYTUqUYmLw17kK-A
I have actually met David Barron on a number of occasions. I have had
dinner with David Wheeler several times. When I went up to Martin
Campbell-Kelly (who I had admired for years) at a meeting a couple of
years ago, I was gobsmacked to find out that he already knew who *I* was!
I have had a lot of contact with Martin Richards (I used to run a BCPL
user group).
And I met Ritchie and Thompson once, briefly.
This is all quite worrying as some of them are now dead.
I remember when Dennis Ritchie would post on the newsgroup comp.lang.c
Sort of gives him an air of authority.
He posted here many times.
I took a multi-year hiatus from Usenet.
Went from needing serious consultation on C using comp.lang.c when I was
a C programmer to looking
at C as folklore after not using it (or Usenet) for a decade or so.
So I missed my chance to interact with Dennis here.
I would hate to live without a garbage collector these days.
But if I needed to do anything "real-time" ("ring zero" stuff) I think C
is still a viable choice
of language.
I liked the standardization documents and the language definition
documents that accompanied C.
IIRC the world treated languages differently back then. I don't have an
exact description of how
they were different, but it seemed things were more formally specified
back then. BNF diagrams
and all.
The PL/I standard is extremely complete and detailed.
--
Pete
Jon Elson
2020-12-16 03:54:20 UTC
Permalink
Post by Peter Flass
Post by Dallas
IIRC the world treated languages differently back then. I don't have an
exact description of how
they were different, but it seemed things were more formally specified
back then. BNF diagrams
and all.
The PL/I standard is extremely complete and detailed.
I have seen the bnf for Pascal, it was not too huge, but certainly spanned a
bunch of pages.

Jon
Peter Flass
2020-12-16 14:19:13 UTC
Permalink
Post by Jon Elson
Post by Peter Flass
Post by Dallas
IIRC the world treated languages differently back then. I don't have an
exact description of how
they were different, but it seemed things were more formally specified
back then. BNF diagrams
and all.
The PL/I standard is extremely complete and detailed.
I have seen the bnf for Pascal, it was not too huge, but certainly spanned a
bunch of pages.
Possibly now languages of the week come and go so fast it’s not worth
writing a detailed spec.
--
Pete
Thomas Koenig
2020-12-16 16:06:42 UTC
Permalink
Post by Peter Flass
Post by Jon Elson
Post by Peter Flass
Post by Dallas
IIRC the world treated languages differently back then. I don't have an
exact description of how
they were different, but it seemed things were more formally specified
back then. BNF diagrams
and all.
The PL/I standard is extremely complete and detailed.
I have seen the bnf for Pascal, it was not too huge, but certainly spanned a
bunch of pages.
Possibly now languages of the week come and go so fast it’s not worth
writing a detailed spec.
A random C++ draft standard from 2018 I just grabbed has 1572 pages.
I guess that counts as "far too detailed" (but then again, C++ has
become completely insane).

Fortran is not a small language at all, but the 2018 standard has 630
pages, which is already large.

It's interesting to read what Kernighan has to say about C++ -
he's a very polite guy, but he writes in his memoir that he is
"barely literate in [C++]". Small wonder.
Scott Lurndal
2020-12-16 17:17:00 UTC
Permalink
Post by Thomas Koenig
Post by Jon Elson
Post by Peter Flass
Post by Dallas
IIRC the world treated languages differently back then. I don't have an
exact description of how
they were different, but it seemed things were more formally specified
back then. BNF diagrams
and all.
The PL/I standard is extremely complete and detailed.
I have seen the bnf for Pascal, it was not too huge, but certainly spanned a
bunch of pages.
Possibly now languages of the week come and go so fast it’s not worth
writing a detailed spec.
A random C++ draft standard from 2018 I just grabbed has 1572 pages.
I guess that counts as "far too detailed" (but then again, C++ has
become completely insane).
The ARM (Acorn RISC Machine) processor documentation for the 8th
generation processors (ARMv8.6) runs to 8248 pages. The Interrupt
controller, I/O Memory management unit and external debug unit
documentation adds an additional 2000+ pages.

There ain't nothing simple any more.
Niklas Karlsson
2020-12-17 13:02:51 UTC
Permalink
Post by Thomas Koenig
It's interesting to read what Kernighan has to say about C++ -
he's a very polite guy, but he writes in his memoir that he is
"barely literate in [C++]". Small wonder.
He also defends at least early C++, saying that Stroustrup had good
reasons for the choices he made. He doesn't say anything like that about
what's happened since, though...

Niklas
--
Just because you *can* write a Java Virtual Machine in INTERCAL, that
doesn't mean you *should*.
-- David Cameron Staples makes the Understatement of the Year in asr
Jorgen Grahn
2020-12-19 18:39:01 UTC
Permalink
Post by Peter Flass
Post by Jon Elson
Post by Peter Flass
Post by Dallas
IIRC the world treated languages differently back then. I don't have an
exact description of how
they were different, but it seemed things were more formally specified
back then. BNF diagrams
and all.
The PL/I standard is extremely complete and detailed.
I have seen the bnf for Pascal, it was not too huge, but certainly spanned a
bunch of pages.
Possibly now languages of the week come and go so fast it’s not worth
writing a detailed spec.
I've assumed it's the difference between these standpoints:

A "I invented this language, but I can't be bothered writing compilers
for every crazy architecture people come up with nowadays. I want
people to use it for a long time. I don't want it to end up like
Pascal ... so here's a spec you can follow to make a compiler, and
a spec you can read if you want to use the language or verify that
your program is correct."

B "I invented this language, and here's a compiler. It's open source,
so you can port it to crazy architectures if you want ... or you
can send patches and see if I accept them. As for correctness, if
your program seems to do useful stuff, isn't that good enough? Try
not to be so negative!

Also, be prepared to fix your program when I change what the
compiler does, which I may do now and them. Software has a
best-before date, you know."

(Replace "compiler" with "interpreter" for most newer languages.)

/Jorgen
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
John Levine
2020-12-16 18:25:51 UTC
Permalink
Post by Peter Flass
The PL/I standard is extremely complete and detailed.
It was 400 pages, largely in impenetrable Vienna Definition Language.
The language in the standard was considerably smaller than the IBM
dialect that most people used (still use, I suppose).

Fortran 77 was about the same size, although it was really only half
that length because each pair of pages had the spec for the full
language on one side and the subset on the other.

The C and Fortran standards have each grown quite a lot, in libraries
and interfaces more than the base languages.
--
Regards,
John Levine, ***@taugh.com, Primary Perpetrator of "The Internet for Dummies",
Please consider the environment before reading this e-mail. https://jl.ly
Jorgen Grahn
2020-12-19 18:26:08 UTC
Permalink
On Mon, 2020-12-14, Dallas wrote:
...
I took a multi-year hiatus from Usenet. Went from needing serious
consultation on C using comp.lang.c when I was a C programmer to
looking at C as folklore after not using it (or Usenet) for a decade
or so.
So I missed my chance to interact with Dennis here.
I would hate to live without a garbage collector these days.
You probably mean "I would hate to live with C's memory management".
Neither C++ nor Python (to name two popular languages) rely on a
garbage collector, but you still don't have to manage memory manually.

(I am a big fan of the C++ way of doing it, since from my point of
view, it stays in the C tradition, which I like.)
But if I needed to do anything "real-time" ("ring zero" stuff) I
think C is still a viable choice of language.
...

/Jorgen
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Ahem A Rivet's Shot
2020-12-19 18:47:35 UTC
Permalink
On 19 Dec 2020 18:26:08 GMT
Post by Jorgen Grahn
You probably mean "I would hate to live with C's memory management".
Neither C++ nor Python (to name two popular languages) rely on a
garbage collector, but you still don't have to manage memory manually.
Python does have one: https://stackify.com/python-garbage-collection/.
C++ doesn't have one built in, but there are any number of them you can
use.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
J. Clarke
2020-12-19 19:46:39 UTC
Permalink
Post by Jorgen Grahn
...
I took a multi-year hiatus from Usenet. Went from needing serious
consultation on C using comp.lang.c when I was a C programmer to
looking at C as folklore after not using it (or Usenet) for a decade
or so.
So I missed my chance to interact with Dennis here.
I would hate to live without a garbage collector these days.
You probably mean "I would hate to live with C's memory management".
Neither C++ nor Python (to name two popular languages) rely on a
garbage collector, but you still don't have to manage memory manually.
What leads you to believe that Python does not use a garbage
collector?
Post by Jorgen Grahn
(I am a big fan of the C++ way of doing it, since from my point of
view, it stays in the C tradition, which I like.)
But if I needed to do anything "real-time" ("ring zero" stuff) I
think C is still a viable choice of language.
...
/Jorgen
Jorgen Grahn
2020-12-20 16:14:11 UTC
Permalink
Post by J. Clarke
Post by Jorgen Grahn
...
I took a multi-year hiatus from Usenet. Went from needing serious
consultation on C using comp.lang.c when I was a C programmer to
looking at C as folklore after not using it (or Usenet) for a decade
or so.
So I missed my chance to interact with Dennis here.
I would hate to live without a garbage collector these days.
You probably mean "I would hate to live with C's memory management".
Neither C++ nor Python (to name two popular languages) rely on a
garbage collector, but you still don't have to manage memory manually.
What leads you to believe that Python does not use a garbage
collector?
I know it uses reference counting, and I was under the impression that
wasn't classified as a "garbage collector".

The Wikipedia page
<https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)>
says it does ... but then goes on to use my definition when it lists
the old Boehm garbage collector for C and C++, but not the widely
used shared_ptrs, which rely on reference counting. So it's still not
clear to me which definition is correct.
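
For concreteness, a minimal sketch of the shared_ptr counting behaviour
(the names here are mine; nothing is specific to one implementation):

#include <cstdio>
#include <memory>

int main() {
    // make_shared allocates the int and its reference count together
    std::shared_ptr<int> a = std::make_shared<int>(42);    // count is 1
    {
        std::shared_ptr<int> b = a;                        // count is 2
        std::printf("inside block: %ld\n", a.use_count());
    }                                                      // b destroyed: back to 1
    std::printf("after block: %ld\n", a.use_count());
}                                                          // count hits 0: int is freed

One relevant difference: a pure reference count can never reclaim
reference cycles, which is why Python layers a cycle-detecting
collector on top of its counting.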

On the other hand, my idea that "reference counting is not true garbage
collection" may simply come from the hype around Java in the 1990s.
People were eager to claim that things that already existed in other
languages weren't good enough.

/Jorgen
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
J. Clarke
2020-12-20 16:29:54 UTC
Permalink
Post by Jorgen Grahn
Post by J. Clarke
Post by Jorgen Grahn
...
I took a multi-year hiatus from Usenet. Went from needing serious
consultation on C using comp.lang.c when I was a C programmer to
looking at C as folklore after not using it (or Usenet) for a decade
or so.
So I missed my chance to interact with Dennis here.
I would hate to live without a garbage collector these days.
You probably mean "I would hate to live with C's memory management".
Neither C++ nor Python (to name two popular languages) rely on a
garbage collector, but you still don't have to manage memory manually.
What leads you to believe that Python does not use a garbage
collector?
I know it uses reference counting, and I was under the impression that
wasn't classified as a "garbage collector".
The Wikipedia page
<https://en.wikipedia.org/wiki/Garbage_collection_(computer_science)>
says it does ... but then goes on to use my definition when it lists
the old Boehm garbage collector for C and C++, but not the widely
used shared_ptrs, which rely on reference counting. So it's still not
clear to me which definition is correct.
On the other hand, my idea "reference counting is not true garbage
collection" may simply come from the hype around Java in the 1990s.
People were eager to claim that things that already existed in other
languages, weren't good enough.
Reference counting is not the only garbage collector in Python, though.
There is also a generational garbage collector. While that is usually
transparent to the programmer, there is a module (gc) that allows it to
be managed if one has a situation in which making adjustments would be
beneficial.
Dallas
2020-12-20 14:57:36 UTC
Permalink
Post by Jorgen Grahn
I took a multi-year hiatus from Usenet. Went from needing serious
consultation on C using comp.lang.c when I was a C programmer to
looking at C as folklore after not using it (or Usenet) for a decade
or so.
So I missed my chance to interact with Dennis here.
I would hate to live without a garbage collector these days.
You probably mean "I would hate to live with C's memory management".
Neither C++ nor Python (to name two popular languages) rely on a
garbage collector, but you still don't have to manage memory manually.
(I am a big fan of the C++ way of doing it, since from my point of
view, it stays in the C tradition, which I like.)
I have little experience with C++.

How does C++ simplify memory management?
Dan Espen
2020-12-20 15:43:43 UTC
Permalink
Post by Dallas
Post by Jorgen Grahn
I took a multi-year hiatus from Usenet. Went from needing serious
consultation on C using comp.lang.c when I was a C programmer to
looking at C as folklore after not using it (or Usenet) for a decade
or so.
So I missed my chance to interact with Dennis here.
I would hate to live without a garbage collector these days.
You probably mean "I would hate to live with C's memory management".
Neither C++ nor Python (to name two popular languages) rely on a
garbage collector, but you still don't have to manage memory manually.
(I am a big fan of the C++ way of doing it, since from my point of
view, it stays in the C tradition, which I like.)
I have little experience with C++
How does C++ simplify memory management?
I know enough C++ to dislike it.

When objects go out of scope, they get freed automatically.
Objects have destructors, so you can embed all your cleanup in the
destructor and it gets invoked when you destroy the object or it goes
out of scope.
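
A minimal sketch of what that looks like (the LogFile type is invented
just for illustration):

#include <cstdio>

struct LogFile {
    std::FILE *f;
    explicit LogFile(const char *path) : f(std::fopen(path, "w")) {}
    ~LogFile() { if (f) std::fclose(f); }   // cleanup lives in the destructor
};

int main() {
    LogFile log("demo.log");                // resource acquired here
    if (log.f)
        std::fprintf(log.f, "hello\n");
}                                           // log goes out of scope: file is closed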

Best thing about C++? The // comment delimiter. Now mostly adopted by
C.

The very worst thing? Templates. Templates led to some of the worst
compile/link procedures I've ever seen.
--
Dan Espen
Jorgen Grahn
2020-12-20 16:38:57 UTC
Permalink
Post by Dan Espen
Post by Dallas
Post by Jorgen Grahn
I took a multi-year hiatus from Usenet. Went from needing serious
consultation on C using comp.lang.c when I was a C programmer to
looking at C as folklore after not using it (or Usenet) for a decade
or so.
So I missed my chance to interact with Dennis here.
I would hate to live without a garbage collector these days.
You probably mean "I would hate to live with C's memory management".
Neither C++ nor Python (to name two popular languages) rely on a
garbage collector, but you still don't have to manage memory manually.
(I am a big fan of the C++ way of doing it, since from my point of
view, it stays in the C tradition, which I like.)
I have little experience with C++
How does C++ simplify memory management?
...
Post by Dan Espen
When objects go out of range, they get freed automatically.
Objects have destructors so you can embed all your cleanup in the
destructor and it gets invoked when you destroy the object or it goes
out of range.
That's a good summary.

And the reason I said "stays in the C tradition" is that it's still
structs on the stack, or embedded in other objects, with well-defined
lifetimes.

/Jorgen
--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
Dan Espen
2020-12-20 16:54:08 UTC
Permalink
Post by Jorgen Grahn
Post by Dallas
Post by Jorgen Grahn
I took a multi-year hiatus from Usenet. Went from needing serious
consultation on C using comp.lang.c when I was a C programmer to
looking at C as folklore after not using it (or Usenet) for a
decade or so. So I missed my chance to interact with Dennis here.
I would hate to live without a garbage collector these days.
You probably mean "I would hate to live with C's memory
management". Neither C++ nor Python (to name two popular
languages) rely on a garbage collector, but you still don't have to
manage memory manually.
(I am a big fan of the C++ way of doing it, since from my point of
view, it stays in the C tradition, which I like.)
I have little experience with C++
How does C++ simplify memory management?
...
When objects go out of range, they get freed automatically. Objects
have destructors so you can embed all your cleanup in the destructor
and it gets invoked when you destroy the object or it goes out of
range.
That's a good summary.
And the reason I said "stays in the C tradition" is that it's still
structs on the stack, or embedded in other objects, with well-defined
lifetimes.
Thanks.

Where I work we had an interesting experience with C++ new vs C malloc.
I forget the exact numbers, but with malloc, if you malloc a single
byte, there's overhead, but I believe it's something like 4 bytes across
thousands of 1-byte mallocs. Every new took at least 4 (could have been
8) additional bytes, for every single new. So this application
programmer came to me and couldn't figure out why his program, once
converted to C++, was running out of memory. That's when I dug through
some control blocks to figure out what was going on.

So, a warning: if you are using new for lots of small memory chunks,
watch out.
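
If you want to see the per-chunk cost on a current Linux box, glibc's
malloc_usable_size() shows what the allocator actually charged you
(glibc-specific, and it assumes operator new forwards to malloc there;
the z/OS numbers above came from a different allocator):

#include <cstdio>
#include <cstdlib>
#include <malloc.h>   // malloc_usable_size() is a glibc extension

int main() {
    void *m = std::malloc(1);
    char *n = new char[1];
    // Both requests were for 1 byte; the usable sizes show the rounding
    // and bookkeeping overhead per allocation.
    std::printf("malloc(1):   %zu usable bytes\n", malloc_usable_size(m));
    std::printf("new char[1]: %zu usable bytes\n", malloc_usable_size(n));
    std::free(m);
    delete[] n;
}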
--
Dan Espen
Scott Lurndal
2020-12-21 18:13:34 UTC
Permalink
Post by Dan Espen
Post by Jorgen Grahn
...
When objects go out of range, they get freed automatically. Objects
have destructors so you can embed all your cleanup in the destructor
and it gets invoked when you destroy the object or it goes out of
range.
That's a good summary.
And the reason I said "stays in the C tradition" is that it's still
structs on the stack, or embedded in other objects, with well-defined
lifetimes.
Thanks.
Where I work we had an interesting experience with C++ new vs C malloc.
I forget the exact numbers but with malloc, if you malloc a single byte,
there's overhead but I believe it's something like 4 bytes for thousands
of 1 byte mallocs. Every new takes at least 4 (could have been 8)
additional bytes for every single new. So, this application programmer
came to me and couldn't figure out why his program converted to C++ was
running out a memory. That's when I dug through some control blocks to
figure out what was going on.
So, warning, if you are using new for lots of small memory chunks, watch
out.
The g++ default implementation of the C++ 'new' operator simply calls malloc
on linux.

malloc will allocate memory that's aligned to the default platform alignment
(generally the size of the largest native data type, so 64 or 128 bits typically),
and will allocate 64 bits just before the returned address for heap state.
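
You can watch that forwarding from user code, since the global
allocation functions are replaceable (a sketch; the logging is mine,
not anything the runtime does by default):

#include <cstdio>
#include <cstdlib>
#include <new>

// Replace global operator new: log each request, then forward to malloc.
void *operator new(std::size_t n) {
    void *p = std::malloc(n ? n : 1);
    if (!p) throw std::bad_alloc();
    std::printf("operator new(%zu) -> %p\n", n, p);
    return p;
}
void operator delete(void *p) noexcept { std::free(p); }
void operator delete(void *p, std::size_t) noexcept { std::free(p); }

int main() {
    char *x = new char[10];   // the default operator new[] forwards here
    delete[] x;
}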
Dan Espen
2020-12-21 18:37:04 UTC
Permalink
Post by Scott Lurndal
Post by Dan Espen
Post by Jorgen Grahn
...
When objects go out of range, they get freed automatically. Objects
have destructors so you can embed all your cleanup in the destructor
and it gets invoked when you destroy the object or it goes out of
range.
That's a good summary.
And the reason I said "stays in the C tradition" is that it's still
structs on the stack, or embedded in other objects, with well-defined
lifetimes.
Thanks.
Where I work we had an interesting experience with C++ new vs C malloc.
I forget the exact numbers but with malloc, if you malloc a single byte,
there's overhead but I believe it's something like 4 bytes for thousands
of 1 byte mallocs. Every new takes at least 4 (could have been 8)
additional bytes for every single new. So, this application programmer
came to me and couldn't figure out why his program converted to C++ was
running out a memory. That's when I dug through some control blocks to
figure out what was going on.
So, warning, if you are using new for lots of small memory chunks, watch
out.
The g++ default implementation of the C++ 'new' operator simply calls malloc
on linux.
malloc will allocate memory that's aligned to the default platform alignment
(generally the size of the largest native data type, so 64 or 128 bits typically),
and will allocate 64 bits just before the returned address for heap state.
I'm not sure what's going on but this code:

#include <stdlib.h>

int main() {
    char *x = new char[10];
    char *y = (char *)malloc(10);
}

with these compile options:

g++ -g -c -Wa,-alh x.C

Produces a call to _Znam for new, but a call to malloc for new.
--
Dan Espen
J. Clarke
2020-12-21 21:01:13 UTC
Permalink
Post by Dan Espen
Post by Scott Lurndal
Post by Dan Espen
Post by Jorgen Grahn
...
When objects go out of range, they get freed automatically. Objects
have destructors so you can embed all your cleanup in the destructor
and it gets invoked when you destroy the object or it goes out of
range.
That's a good summary.
And the reason I said "stays in the C tradition" is that it's still
structs on the stack, or embedded in other objects, with well-defined
lifetimes.
Thanks.
Where I work we had an interesting experience with C++ new vs C malloc.
I forget the exact numbers but with malloc, if you malloc a single byte,
there's overhead but I believe it's something like 4 bytes for thousands
of 1 byte mallocs. Every new takes at least 4 (could have been 8)
additional bytes for every single new. So, this application programmer
came to me and couldn't figure out why his program converted to C++ was
running out a memory. That's when I dug through some control blocks to
figure out what was going on.
So, warning, if you are using new for lots of small memory chunks, watch
out.
The g++ default implementation of the C++ 'new' operator simply calls malloc
on linux.
malloc will allocate memory that's aligned to the default platform alignment
(generally the size of the largest native data type, so 64 or 128 bits typically),
and will allocate 64 bits just before the returned address for heap state.
#include <stdlib.h>
int main() {
char *x = new char[10];
char *y = (char *)malloc(10);
}
g++ -g -c -Wa,-alh x.C
Produces a call to _Znam for new, but a call to malloc for new.
That last line makes no sense to me. Do I need more coffee or should
one of those "new"s be something else?
Dan Espen
2020-12-21 21:03:13 UTC
Permalink
Post by J. Clarke
Post by Dan Espen
Post by Scott Lurndal
Post by Dan Espen
Post by Jorgen Grahn
...
When objects go out of range, they get freed automatically. Objects
have destructors so you can embed all your cleanup in the destructor
and it gets invoked when you destroy the object or it goes out of
range.
That's a good summary.
And the reason I said "stays in the C tradition" is that it's still
structs on the stack, or embedded in other objects, with well-defined
lifetimes.
Thanks.
Where I work we had an interesting experience with C++ new vs C malloc.
I forget the exact numbers but with malloc, if you malloc a single byte,
there's overhead but I believe it's something like 4 bytes for thousands
of 1 byte mallocs. Every new takes at least 4 (could have been 8)
additional bytes for every single new. So, this application programmer
came to me and couldn't figure out why his program converted to C++ was
running out a memory. That's when I dug through some control blocks to
figure out what was going on.
So, warning, if you are using new for lots of small memory chunks, watch
out.
The g++ default implementation of the C++ 'new' operator simply calls malloc
on linux.
malloc will allocate memory that's aligned to the default platform alignment
(generally the size of the largest native data type, so 64 or 128 bits typically),
and will allocate 64 bits just before the returned address for heap state.
#include <stdlib.h>
int main() {
char *x = new char[10];
char *y = (char *)malloc(10);
}
g++ -g -c -Wa,-alh x.C
Produces a call to _Znam for new, but a call to malloc for new.
That last line makes no sense to me. Do I need more coffee or should
one of those "new"s be something else?
Yeah, I should have just posted the generated code.
new calls _Znam
malloc calls malloc
--
Dan Espen
Scott Lurndal
2020-12-22 19:05:44 UTC
Permalink
Post by Dan Espen
Post by J. Clarke
Post by Dan Espen
Post by Scott Lurndal
Post by Dan Espen
Post by Jorgen Grahn
...
When objects go out of range, they get freed automatically. Objects
have destructors so you can embed all your cleanup in the destructor
and it gets invoked when you destroy the object or it goes out of
range.
That's a good summary.
And the reason I said "stays in the C tradition" is that it's still
structs on the stack, or embedded in other objects, with well-defined
lifetimes.
Thanks.
Where I work we had an interesting experience with C++ new vs C malloc.
I forget the exact numbers but with malloc, if you malloc a single byte,
there's overhead but I believe it's something like 4 bytes for thousands
of 1 byte mallocs. Every new takes at least 4 (could have been 8)
additional bytes for every single new. So, this application programmer
came to me and couldn't figure out why his program converted to C++ was
running out a memory. That's when I dug through some control blocks to
figure out what was going on.
So, warning, if you are using new for lots of small memory chunks, watch
out.
The g++ default implementation of the C++ 'new' operator simply calls malloc
on linux.
malloc will allocate memory that's aligned to the default platform alignment
(generally the size of the largest native data type, so 64 or 128 bits typically),
and will allocate 64 bits just before the returned address for heap state.
#include <stdlib.h>
int main() {
char *x = new char[10];
char *y = (char *)malloc(10);
}
g++ -g -c -Wa,-alh x.C
Produces a call to _Znam for new, but a call to malloc for new.
That last line makes no sense to me. Do I need more coffee or should
one of those "new"s be something else?
Yeah, I should have just posted the generated code.
new calls _Znam
malloc calls malloc
$ c++filt _Znam
operator new[](unsigned long)

The function 'operator new[]' calls _Znwm which calls malloc.

Dump of assembler code for function _Znam:
=> 0x00007ffff7ac4370 <+0>: sub $0x8,%rsp
0x00007ffff7ac4374 <+4>: callq 0x7ffff7abdba8 <_Znwm@plt>
0x00007ffff7ac4379 <+9>: add $0x8,%rsp
0x00007ffff7ac437d <+13>: retq
0x00007ffff7ac437e <+14>: add $0x1,%rdx
0x00007ffff7ac4382 <+18>: mov %rax,%rdi
0x00007ffff7ac4385 <+21>: je 0x7ffff7ac438c <_Znam+28>
0x00007ffff7ac4387 <+23>: callq 0x7ffff7ac0458 <***@plt>
0x00007ffff7ac438c <+28>: callq 0x7ffff7abe578 <***@plt>

Dump of assembler code for function _Znwm:
=> 0x00007ffff7ac42c0 <+0>: push %rbx
0x00007ffff7ac42c1 <+1>: test %rdi,%rdi
0x00007ffff7ac42c4 <+4>: mov %rdi,%rbx
0x00007ffff7ac42c7 <+7>: mov $0x1,%eax
0x00007ffff7ac42cc <+12>: cmove %rax,%rbx
0x00007ffff7ac42d0 <+16>: mov %rbx,%rdi
0x00007ffff7ac42d3 <+19>: callq 0x7ffff7abe028 <malloc@plt>
0x00007ffff7ac42d8 <+24>: test %rax,%rax
0x00007ffff7ac42db <+27>: je 0x7ffff7ac42e0 <_Znwm+32>
0x00007ffff7ac42dd <+29>: pop %rbx
0x00007ffff7ac42de <+30>: retq
Post by Dan Espen
--
Dan Espen
Dan Espen
2020-12-22 19:13:23 UTC
Permalink
Post by Scott Lurndal
Post by Dan Espen
Post by J. Clarke
Post by Dan Espen
Post by Scott Lurndal
Post by Dan Espen
Post by Jorgen Grahn
...
When objects go out of range, they get freed automatically. Objects
have destructors so you can embed all your cleanup in the destructor
and it gets invoked when you destroy the object or it goes out of
range.
That's a good summary.
And the reason I said "stays in the C tradition" is that it's still
structs on the stack, or embedded in other objects, with well-defined
lifetimes.
Thanks.
Where I work we had an interesting experience with C++ new vs C malloc.
I forget the exact numbers but with malloc, if you malloc a single byte,
there's overhead but I believe it's something like 4 bytes for thousands
of 1 byte mallocs. Every new takes at least 4 (could have been 8)
additional bytes for every single new. So, this application programmer
came to me and couldn't figure out why his program converted to C++ was
running out a memory. That's when I dug through some control blocks to
figure out what was going on.
So, warning, if you are using new for lots of small memory chunks, watch
out.
The g++ default implementation of the C++ 'new' operator simply calls malloc
on linux.
malloc will allocate memory that's aligned to the default platform alignment
(generally the size of the largest native data type, so 64 or 128 bits typically),
and will allocate 64 bits just before the returned address for heap state.
#include <stdlib.h>
int main() {
char *x = new char[10];
char *y = (char *)malloc(10);
}
g++ -g -c -Wa,-alh x.C
Produces a call to _Znam for new, but a call to malloc for new.
That last line makes no sense to me. Do I need more coffee or should
one of those "new"s be something else?
Yeah, I should have just posted the generated code.
new calls _Znam
malloc calls malloc
$ c++filt _Znam
operator new[](unsigned long)
The function 'operator new[]' calls _Znwm which calls malloc.
=> 0x00007ffff7ac4370 <+0>: sub $0x8,%rsp
0x00007ffff7ac4379 <+9>: add $0x8,%rsp
0x00007ffff7ac437d <+13>: retq
0x00007ffff7ac437e <+14>: add $0x1,%rdx
0x00007ffff7ac4382 <+18>: mov %rax,%rdi
0x00007ffff7ac4385 <+21>: je 0x7ffff7ac438c <_Znam+28>
=> 0x00007ffff7ac42c0 <+0>: push %rbx
0x00007ffff7ac42c1 <+1>: test %rdi,%rdi
0x00007ffff7ac42c4 <+4>: mov %rdi,%rbx
0x00007ffff7ac42c7 <+7>: mov $0x1,%eax
0x00007ffff7ac42cc <+12>: cmove %rax,%rbx
0x00007ffff7ac42d0 <+16>: mov %rbx,%rdi
0x00007ffff7ac42d8 <+24>: test %rax,%rax
0x00007ffff7ac42db <+27>: je 0x7ffff7ac42e0 <_Znwm+32>
0x00007ffff7ac42dd <+29>: pop %rbx
0x00007ffff7ac42de <+30>: retq
So, on z/OS a 'new' acquired extra bytes in the malloced memory.
You might ask for 8 but 12 were acquired.

I see an add in there. Any idea why?
--
Dan Espen
Vir Campestris
2020-12-22 21:55:25 UTC
Permalink
Post by Dan Espen
Post by Scott Lurndal
Post by Dan Espen
Post by J. Clarke
Post by Dan Espen
Post by Scott Lurndal
Post by Dan Espen
Post by Jorgen Grahn
...
When objects go out of range, they get freed automatically. Objects
have destructors so you can embed all your cleanup in the destructor
and it gets invoked when you destroy the object or it goes out of
range.
That's a good summary.
And the reason I said "stays in the C tradition" is that it's still
structs on the stack, or embedded in other objects, with well-defined
lifetimes.
Thanks.
Where I work we had an interesting experience with C++ new vs C malloc.
I forget the exact numbers but with malloc, if you malloc a single byte,
there's overhead but I believe it's something like 4 bytes for thousands
of 1 byte mallocs. Every new takes at least 4 (could have been 8)
additional bytes for every single new. So, this application programmer
came to me and couldn't figure out why his program converted to C++ was
running out a memory. That's when I dug through some control blocks to
figure out what was going on.
So, warning, if you are using new for lots of small memory chunks, watch
out.
The g++ default implementation of the C++ 'new' operator simply calls malloc
on linux.
malloc will allocate memory that's aligned to the default platform alignment
(generally the size of the largest native data type, so 64 or 128 bits typically),
and will allocate 64 bits just before the returned address for heap state.
#include <stdlib.h>
int main() {
char *x = new char[10];
char *y = (char *)malloc(10);
}
g++ -g -c -Wa,-alh x.C
Produces a call to _Znam for new, but a call to malloc for new.
That last line makes no sense to me. Do I need more coffee or should
one of those "new"s be something else?
Yeah, I should have just posted the generated code.
new calls _Znam
malloc calls malloc
$ c++filt _Znam
operator new[](unsigned long)
The function 'operator new[]' calls _Znwm which calls malloc.
=> 0x00007ffff7ac4370 <+0>: sub $0x8,%rsp
0x00007ffff7ac4379 <+9>: add $0x8,%rsp
0x00007ffff7ac437d <+13>: retq
0x00007ffff7ac437e <+14>: add $0x1,%rdx
0x00007ffff7ac4382 <+18>: mov %rax,%rdi
0x00007ffff7ac4385 <+21>: je 0x7ffff7ac438c <_Znam+28>
=> 0x00007ffff7ac42c0 <+0>: push %rbx
0x00007ffff7ac42c1 <+1>: test %rdi,%rdi
0x00007ffff7ac42c4 <+4>: mov %rdi,%rbx
0x00007ffff7ac42c7 <+7>: mov $0x1,%eax
0x00007ffff7ac42cc <+12>: cmove %rax,%rbx
0x00007ffff7ac42d0 <+16>: mov %rbx,%rdi
0x00007ffff7ac42d8 <+24>: test %rax,%rax
0x00007ffff7ac42db <+27>: je 0x7ffff7ac42e0 <_Znwm+32>
0x00007ffff7ac42dd <+29>: pop %rbx
0x00007ffff7ac42de <+30>: retq
So, on z/OS a 'new' acquired extra bytes in the malloced memory.
You might ask for 8 but 12 were acquired.
I see an add in there. Any idea why?
Which one did you mean?

0x00007ffff7ac4379 <+9>: add $0x8,%rsp

is cleaning up the stack

0x00007ffff7ac437e <+14>: add $0x1,%rdx

is after the ret at the end of the function, and in whatever the next
one does.

Andy
Scott Lurndal
2020-12-22 21:58:08 UTC
Permalink
Post by Dan Espen
Post by Scott Lurndal
Post by Dan Espen
Post by J. Clarke
Post by Dan Espen
#include <stdlib.h>
int main() {
char *x = new char[10];
char *y = (char *)malloc(10);
}
g++ -g -c -Wa,-alh x.C
Produces a call to _Znam for new, but a call to malloc for new.
That last line makes no sense to me. Do I need more coffee or should
one of those "new"s be something else?
Yeah, I should have just posted the generated code.
new calls _Znam
malloc calls malloc
$ c++filt _Znam
operator new[](unsigned long)
The function 'operator new[]' calls _Znwm which calls malloc.
=> 0x00007ffff7ac4370 <+0>: sub $0x8,%rsp
0x00007ffff7ac4379 <+9>: add $0x8,%rsp
0x00007ffff7ac437d <+13>: retq
0x00007ffff7ac437e <+14>: add $0x1,%rdx
0x00007ffff7ac4382 <+18>: mov %rax,%rdi
0x00007ffff7ac4385 <+21>: je 0x7ffff7ac438c <_Znam+28>
=> 0x00007ffff7ac42c0 <+0>: push %rbx
0x00007ffff7ac42c1 <+1>: test %rdi,%rdi
0x00007ffff7ac42c4 <+4>: mov %rdi,%rbx
0x00007ffff7ac42c7 <+7>: mov $0x1,%eax
0x00007ffff7ac42cc <+12>: cmove %rax,%rbx
0x00007ffff7ac42d0 <+16>: mov %rbx,%rdi
0x00007ffff7ac42d8 <+24>: test %rax,%rax
0x00007ffff7ac42db <+27>: je 0x7ffff7ac42e0 <_Znwm+32>
0x00007ffff7ac42dd <+29>: pop %rbx
0x00007ffff7ac42de <+30>: retq
So, on z/OS a 'new' acquired extra bytes in the malloced memory.
You might ask for 8 but 12 were acquired.
I see an add in there. Any idea why?
The first 'sub' allocates 8 bytes of stack space, then _Znam
calls _Znwm (which demangles to "operator new(unsigned long)"),
when _Znwm returns, the 'add' deallocates the 8 bytes of stack
space. The code following the 'retq' is for unwinding the stack
in case an exception is thrown; the add in this code is opaque
to me, but I assume that the exception 'throw' code which invokes
this has left something interesting in %rdx which is the register
in which the third parameter to a function is passed to the function
in x86_64 mode (%rdi, %rsi, %rdx, %rcx, %r8, %r9 carry the first six
function parameters into any called x86_64 function in the standard
linux ABI). The subsequent 'je' will test the condition flags set
by the add and either finish the unwind or ABEND. My guess is that
it's the lexical stack depth as a negative integer, and if it reaches zero
after the add, there are no stack frames left to unwind.

There was similar unwind code after the retq for _Znwm, but I didn't
paste that in the OP.
Ahem A Rivet's Shot
2020-12-20 17:04:35 UTC
Permalink
On Sun, 20 Dec 2020 10:43:43 -0500
Post by Dan Espen
The very worst thing? Templates. Templates led to some of the worst
compile/link procedures I've ever seen.
I'm inclined to agree but without templates C++ would be really
nasty because of the way strict typing gets in the way of polymorphism.
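
For example, one function template covers every comparable type with no
inheritance hierarchy at all (a trivial sketch, names mine):

#include <cstdio>

// One generic definition; the compiler stamps out a copy per type used.
template <typename T>
T max_of(T a, T b) { return b < a ? a : b; }

int main() {
    std::printf("%d\n", max_of(3, 7));       // instantiates max_of<int>
    std::printf("%.1f\n", max_of(2.5, 1.5)); // instantiates max_of<double>
}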
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
Dan Espen
2020-12-20 18:19:20 UTC
Permalink
Post by Ahem A Rivet's Shot
On Sun, 20 Dec 2020 10:43:43 -0500
Post by Dan Espen
The very worst thing? Templates. Templates led to some of the worst
compile/link procedures I've ever seen.
I'm inclined to agree but without templates C++ would be really
nasty because of the way strict typing gets in the way of polymorphism.
Yes it's powerful, but no language should be more complicated than the
problems you're trying to solve with it.

Where I work, one of my tasks was to try to guide other developers
through the compile/link process. HP/UX and z/OS presented compile
processes that were too complicated for the average developer to deal
with.
--
Dan Espen
Ahem A Rivet's Shot
2020-12-20 18:40:30 UTC
Permalink
On Sun, 20 Dec 2020 13:19:20 -0500
Post by Dan Espen
Post by Ahem A Rivet's Shot
On Sun, 20 Dec 2020 10:43:43 -0500
Post by Dan Espen
The very worst thing? Templates. Templates led to some of the worst
compile/link procedures I've ever seen.
I'm inclined to agree but without templates C++ would be really
nasty because of the way strict typing gets in the way of polymorphism.
Yes it's powerful, but no language should be more complicated than the
problems you're trying to solve with it.
That is quoteworthy. I hate it when I have to fight the language to
express the solution to the problem.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
J. Clarke
2020-12-20 19:03:46 UTC
Permalink
Post by Dan Espen
Post by Ahem A Rivet's Shot
On Sun, 20 Dec 2020 10:43:43 -0500
Post by Dan Espen
The very worst thing? Templates. Templates led to some of the worst
compile/link procedures I've ever seen.
I'm inclined to agree but without templates C++ would be really
nasty because of the way strict typing gets in the way of polymorphism.
Yes it's powerful, but no language should be more complicated than the
problems you're trying to solve with it.
Where I work one of my tasks was to try guide other developers through
the compile/link process. HP/UX and z/OS presented compile processes
that were too complicated for the average developer to deal with.
I don't know HP/UX, but this is the nature of z/OS.
Dan Espen
2020-12-20 23:09:21 UTC
Permalink
Post by J. Clarke
Post by Dan Espen
Post by Ahem A Rivet's Shot
On Sun, 20 Dec 2020 10:43:43 -0500
Post by Dan Espen
The very worst thing? Templates. Templates led to some of the worst
compile/link procedures I've ever seen.
I'm inclined to agree but without templates C++ would be really
nasty because of the way strict typing gets in the way of polymorphism.
Yes it's powerful, but no language should be more complicated than the
problems you're trying to solve with it.
Where I work one of my tasks was to try guide other developers through
the compile/link process. HP/UX and z/OS presented compile processes
that were too complicated for the average developer to deal with.
I don't know HP/UX but this is the nature of Z/OS.
Yep, z/OS was pretty tightly tied to 8 character all upper case external
symbols. IBM seemed to struggle with the problem. It looked like, at
first, they didn't want to mess with the linker. Eventually they had
to give in and create the Binder. They took a left turn into the
pre-linker for a while. It was quite a mess.

HP/UX is just another Unix flavor. Like I said, I don't remember a lot
of details but somehow the other UNIX vendors managed to hide the
ugliness better than HP and IBM did.
--
Dan Espen
Thomas Koenig
2020-12-21 08:11:37 UTC
Permalink
Post by Dan Espen
Yep, z/OS was pretty tightly tied to 8 character all upper case external
symbols. IBM seemed to struggle with the problem. It looked like, at
first, they didn't want to mess with the linker.
That linker was a pig (or still is, I guess).

I remember, back in the day, writing programs that used a Calcomp
graphics library on a 3090. They took around 20 minutes, wall
time, to link. As a student assistant, I was paid by the hour,
but it was still aggravating.

It was a revelation when the first HP workstations arrived, and
gnuplot came along.

If you've ever wondered why gnuplot's PostScript terminal has all
these circles and squares where each quadrant is either filled
or isn't (just run "test" on a PostScript terminal in gnuplot):
I created these for a friend. His advisor had extremely strict
standards of what graphs should look like in a PhD thesis,
and it had always been done by a draftsman before.
Post by Dan Espen
HP/UX is just another Unix flavor. Like I said, I don't remember a lot
of details but somehow the other UNIX vendors managed to hide the
ugliness better than HP and IBM did.
Compiling software from the net could be a headache. Of course,
these were the days when everything had #ifdef __HPUX__ or similar,
long before today's configure scripts.

Of course, I haven't used HP-UX for more than 20 years. It would
be interesting to sit down at one of these old boxes and try the
look and feel today.
Peter Flass
2020-12-22 17:39:19 UTC
Permalink
Post by Thomas Koenig
Post by Dan Espen
Yep, z/OS was pretty tightly tied to 8 character all upper case external
symbols. IBM seemed to struggle with the problem. It looked like, at
first, they didn't want to mess with the linker.
That linker was a pig (or still is, I guess).
I remember, back in the day, writing programs that used a Calcomp
graphics library on a 3090. They took around 20 minutes, wall
time, to link. As a student assistant, I was paid by the hour,
but it as still aggravating.
I never experienced problems with the Linkage Editor. Admittedly it wasn’t
a speed demon, partly because it provided a lot of capabilities not often
used, but it was never a bottleneck.

All academic computers I have encountered have been majestically
underpowered. At one employer administrative users were told to stay off
the system completely the last two weeks of the term so students could get
their projects done. Online response time was measured in minutes, except
when the system would crash due to overload.

No other organization would tolerate this. My last employer had peak
workloads twice a year, so the system was sized to provide good response
time during that period, which meant it was way bigger than was needed the
other 300 days a year. That’s why IBM instituted variable capacity,
whatever it’s called. Pay for a smaller system most of the time and turn on
turbo when you need it.
Post by Thomas Koenig
It was a revelation when the first HP workstations arrived, and
gnuplot came along.
That’s why workstations and PCs became popular.
--
Pete
Charlie Gibbs
2020-12-21 17:33:55 UTC
Permalink
Post by Dan Espen
Yep, z/OS was pretty tightly tied to 8 character all upper case external
symbols. IBM seemed to struggle with the problem. It looked like, at
first, they didn't want to mess with the linker. Eventually they had
to give in and create the Binder. They took a left turn into the
pre-linker for a while. It was quite a mess.
HP/UX is just another Unix flavor. Like I said, I don't remember a lot
of details but somehow the other UNIX vendors managed to hide the
ugliness better than HP and IBM did.
Or MICROS~1... :-)
--
/~\ Charlie Gibbs | "Some of you may die,
\ / <***@kltpzyxm.invalid> | but it's a sacrifice
X I'm really at ac.dekanfrus | I'm willing to make."
/ \ if you read it the right way. | -- Lord Farquaad (Shrek)
Peter Flass
2020-12-21 00:35:54 UTC
Permalink
Post by J. Clarke
Post by Dan Espen
Post by Ahem A Rivet's Shot
On Sun, 20 Dec 2020 10:43:43 -0500
Post by Dan Espen
The very worst thing? Templates. Templates led to some of the worst
compile/link procedures I've ever seen.
I'm inclined to agree but without templates C++ would be really
nasty because of the way strict typing gets in the way of polymorphism.
Yes it's powerful, but no language should be more complicated than the
problems you're trying to solve with it.
Where I work one of my tasks was to try guide other developers through
the compile/link process. HP/UX and z/OS presented compile processes
that were too complicated for the average developer to deal with.
I don't know HP/UX but this is the nature of Z/OS.
Not so. Every place I’ve worked had compile-and-link procs, which were
created once, and Danny Developer never had to code more than “// EXEC
LINK,PGM=xxxxxxxx”
--
Pete
Scott Lurndal
2020-12-21 18:10:12 UTC
Permalink
Post by Dan Espen
Post by Dallas
Post by Jorgen Grahn
I took a multi-year hiatus from Usenet. Went from needing serious
consultation on C using comp.lang.c when I was a C programmer to
looking at C as folklore after not using it (or Usenet) for a decade
or so.
So I missed my chance to interact with Dennis here.
I would hate to live without a garbage collector these days.
You probably mean "I would hate to live with C's memory management".
Neither C++ nor Python (to name two popular languages) rely on a
garbage collector, but you still don't have to manage memory manually.
(I am a big fan of the C++ way of doing it, since from my point of
view, it stays in the C tradition, which I like.)
I have little experience with C++
How does C++ simplify memory management?
I know enough C++ to dislike it.
C++ can be palatable if you think of it as C with classes and
be judicious in which language features you use.
Post by Dan Espen
When objects go out of range, they get freed automatically.
Objects have destructors so you can embed all your cleanup in the
destructor and it gets invoked when you destroy the object or it goes
out of range.
Which is quite handy, in many cases.
Post by Dan Espen
Best thing about C++? The // comment delimiter. Now mostly adopted by
C.
The very worst thing? Templates. Templates led to some of the worst
compile/link procedures I've ever seen.
That was definitely true in 1991. They're completely invisible with
modern compilers.
Dan Espen
2020-12-21 18:23:07 UTC
Permalink
Post by Scott Lurndal
Post by Dan Espen
The very worst thing? Templates. Templates led to some of the worst
compile/link procedures I've ever seen.
That was definitely true in 1991. They're completely invisible with
modern compilers.
Well, that wasn't true on z/OS when I retired 5 years ago.
I really doubt that's changed.
--
Dan Espen
J. Clarke
2020-12-21 18:37:23 UTC
Permalink
Post by Dan Espen
Post by Scott Lurndal
Post by Dan Espen
The very worst thing? Templates. Templates led to some of the worst
compile/link procedures I've ever seen.
That was definitely true in 1991. They're completely invisible with
modern compilers.
Well, that wasn't true on z/OS when I retired 5 years ago.
I really doubt that's changed.
I find myself wondering what Scott means when he says "modern
compilers".
Scott Lurndal
2020-12-22 18:59:52 UTC
Permalink
Post by J. Clarke
Post by Dan Espen
Post by Scott Lurndal
Post by Dan Espen
The very worst thing? Templates. Templates led to some of the worst
compile/link procedures I've ever seen.
That was definitely true in 1991. They're completely invisible with
modern compilers.
Well, that wasn't true on z/OS when I retired 5 years ago.
I really doubt that's changed.
I find myself wondering what Scott means when he says "modern
compilers".
The GNU Compiler Collection.
Green Hills C++.
Wind River's Diab compilers.
The Intel C++ compiler.

and a half dozen others. You can't have a modern compiler when
you are stuck with JCL :-)
Dan Espen
2020-12-22 19:23:18 UTC
Permalink
Post by Scott Lurndal
Post by J. Clarke
Post by Dan Espen
Post by Scott Lurndal
Post by Dan Espen
The very worst thing? Templates. Templates led to some of the worst
compile/link procedures I've ever seen.
That was definitely true in 1991. They're completely invisible with
modern compilers.
Well, that wasn't true on z/OS when I retired 5 years ago.
I really doubt that's changed.
I find myself wondering what Scott means when he says "modern
compilers".
The GNU Compiler collection.
Greenhills C++.
Wind Rivers Diab Compilers.
The Intel C++ compiler.
and a half dozen others. You can't have a modern compiler when
you are stuck with JCL :-)
Not sure how JCL comes into the picture.
As I've explained before, I developed z/OS tools to do compiles using
CLIST.

Of course you could also invoke the exact same compiler using z/OS
Unix system services.

IBM came up with 2 C compilers. I think IBM might have done the first
one, C/370. When they wanted ANSI C they farmed out compiler
development; this was around 2000. I don't think I ever found out
who did the actual work; I just got the impression it wasn't IBM.

I think your point might be, it's hard to do seamless stuff when
you want to cater to 100% compatibility for everything that's
come before. But it wasn't JCL, object code format, load module format,
the binder, inter-language calls to all the other languages might have
played a role.
--
Dan Espen
Thomas Koenig
2020-12-21 19:31:24 UTC
Permalink
Post by Scott Lurndal
C++ can be palatable if you think of it as C with classes and
be judicious in which language features you use.
Just read on a mailing list:

# Hiring someone else to write your C++ code is probably a good idea
# for preserving sanity. Although having to read the code later
# will undo any of the previously mentioned benefits.
Ahem A Rivet's Shot
2020-12-14 20:17:22 UTC
Permalink
On Mon, 14 Dec 2020 13:30:10 -0600
Post by Dallas
I remember when Dennis Ritchie would post on the newsgroup comp.lang.c
He was a regular poster in here too.
--
Steve O'Hara-Smith | Directable Mirror Arrays
C:\>WIN | A better way to focus the sun
The computer obeys and wins. | licences available see
You lose and Bill collects. | http://www.sohara.org/
maus
2020-12-14 21:24:52 UTC
Permalink
Post by Dallas
Post by Bob Eager
Post by Peter Flass
https://apple.news/A_-wNCMkYTUqUYmLw17kK-A
I have actually met David Barron on a number of occasions. I have had
dinner with David Wheeler several times. When I went up to Martin
Campbell-Kelly (who I had admired for years) at a meeting a couple of
years ago, I was gobsmacked to find out that he already knew who *I* was!
I have had a lot of contact with Martin Richards (I used to run a BCPL
user group).
And I met Ritchie and Thompson once, briefly.
This is all quite worrying as some of them are now dead.
I remember when Dennis Ritchie would post on the newsgroup comp.lang.c
Also here. I remember a post that ascribed his continued survival to
Medicaid (I don't really understand the US social aid systems).
--
***@mail.com
J. Clarke
2020-12-15 00:30:49 UTC
Permalink
Post by maus
Post by Dallas
Post by Bob Eager
Post by Peter Flass
https://apple.news/A_-wNCMkYTUqUYmLw17kK-A
I have actually met David Barron on a number of occasions. I have had
dinner with David Wheeler several times. When I went up to Martin
Campbell-Kelly (who I had admired for years) at a meeting a couple of
years ago, I was gobsmacked to find out that he already knew who *I* was!
I have had a lot of contact with Martin Richards (I used to run a BCPL
user group).
And I met Ritchie and Thompson once, briefly.
This is all quite worrying as some of them are now dead.
I remember when Dennis Ritchie would post on the newsgroup comp.lang.c
Also here. I remember a post that ascribed his continued survival to
Medicaid (I don't really understand the US social aid systems).
I think you're thinking of Medicare--Medicaid is means-tested,
Medicare is for those over 65. He retired at an age when he was
qualified for Medicare, but I'm pretty sure that his Lucent salary
would have been well above the threshold for Medicaid.
Dan Espen
2020-12-15 02:06:36 UTC
Permalink
Post by J. Clarke
Post by maus
Post by Dallas
Post by Bob Eager
Post by Peter Flass
https://apple.news/A_-wNCMkYTUqUYmLw17kK-A
I have actually met David Barron on a number of occasions. I have had
dinner with David Wheeler several times. When I went up to Martin
Campbell-Kelly (who I had admired for years) at a meeting a couple of
years ago, I was gobsmacked to find out that he already knew who *I* was!
I have had a lot of contact with Martin Richards (I used to run a BCPL
user group).
And I met Ritchie and Thompson once, briefly.
This is all quite worrying as some of them are now dead.
I remember when Dennis Ritchie would post on the newsgroup comp.lang.c
Also here. I remember a post that ascribed his continued survival to
Medicaid (I don't really understand the US social aid systems).
I think you're thinking of Medicare--Medicaid is means-tested,
Medicare is for those over 65. He retired at an age when he was
qualified for Medicare, but I'm pretty sure that his Lucent salary
would have been well above the threshold for Medicaid.
At that time he would have retired with a pension and company paid
health insurance. So, he'd still be using Medicare but would never
need Medicaid.
--
Dan Espen
maus
2020-12-14 19:08:34 UTC
Permalink
Post by Peter Flass
https://apple.news/A_-wNCMkYTUqUYmLw17kK-A
Thanks, very informative. Funny how the name Strachey came up again.
--
***@mail.com
Robin Vowels
2021-02-10 15:42:20 UTC
Permalink
Post by Peter Flass
https://apple.news/A_-wNCMkYTUqUYmLw17kK-A
.
Interesting, but the compiler for the reduced Atlas had not eventuated
by 1965 after three years of trying to redesign Algol 60.
.
Meanwhile, Algol was available on the English Electric DEUCE
and KDF9 in 1963.
The DEUCE had 384 words of high-speed store, and a magnetic
drum of 8K words.
