My comment is that he should have built more of it in Lisp, since it's a better language. First he needed a better Lisp (i.e., a high-performance Lisp). Then he would have had a better foundation. Wouldn't we be better off with hundreds of millions of lines of free software in Lisp rather than C?
You are probably not aware of the details of what he did with Emacs and GCC. The short answer is: it was possible to implement a reasonably effective Lisp in Emacs, but it was impossible to replace most of the Unix infrastructure in it. Therefore GCC was necessary.
It's not about the "theoretical possibilities"; it's about the practical limitations.
I can't refer you to any single resource regarding the limitations of any Lisp for implementing a free-software replacement of Unix, but if you investigate how the Lisp in Emacs was actually implemented, you can get the picture, provided you have enough basic knowledge about the low-level aspects of CPUs, compilers, and interpreters, and also the limitations of all the hardware that came before the most modern CPUs. Even today, what's "good enough" on the desktop can be too demanding for lower-power systems. But the lower-power systems of today are also much more advanced than what we had during the decades in which the GNU project grew.
GCC 1.0 was released in 1987. A 2000 USD computer had less than 1 MB of RAM then.
Million-polygon levels ran at 60 fps, while the majority of games on that system had 1-2 orders of magnitude less geometry, ran at 30 fps, and were written in C/C++.
I'm NOT saying that RMS therefore should have used Lisp instead of C, but I am suggesting that the performance or memory objections are mostly not factual. It's perfectly possible to make a performant, non-memory-hungry Lisp.
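To make that claim concrete, here's a minimal sketch of the kind of code a native-compiling Common Lisp (SBCL, for example) turns into tight machine code; the function name and types are mine, purely for illustration:

    ;; With type declarations and optimization settings, a native-compiling
    ;; Lisp compiles numeric loops to unboxed machine code comparable to C.
    (defun dot-product (a b)
      (declare (type (simple-array single-float (*)) a b)
               (optimize (speed 3) (safety 0)))
      (let ((sum 0.0))
        (declare (type single-float sum))
        (dotimes (i (length a) sum)
          (incf sum (* (aref a i) (aref b i))))))

    ;; (disassemble #'dot-product) shows the generated loop: no consing,
    ;; no type checks, just float loads, multiplies, and adds.

How clean the disassembly really is depends on the implementation, but the point stands: nothing in the language forces boxing or allocation on a hot loop.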
Written using Allegro Common Lisp, a "commercial implementation of the Common Lisp programming language developed by Franz Inc." That's related to Macsyma going proprietary, and "the closing of the MIT Lisp and Macsyma efforts was a key reason Richard Stallman decided to form the Free Software Foundation."
Thanks for the slides (the second link)! What I can conclude is that only the game scripting was ever based on Lisp, never the whole engine? That sounds much less impressive and makes the claimed FPS numbers almost irrelevant.
And there were some other opinions on the usefulness of such an approach:
"Despite its advantages, the concept of implementing the game logic in a separate scripting language and writing an interpreter for it was soon dropped (even by John Carmack who had implemented this concept) because of the overall inflexibility of an interpreted language,[3]"
On the other hand, there are some games which used Lua for scripting:
As for the arguments that you can't interface with others because they don't know Lisp and there aren't lots of libraries: that's true of every language at some point in its lifecycle. There might be other reasons that has never happened for Lisp, or it might just be that no one has successfully made a concerted effort to get it there?
Maybe one of the LLVM-based Lisps would help a transition, or at least let people try Lisp on part of their C/C++ code base.
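As a taste of what "trying Lisp on part of a C/C++ code base" can look like today, here's a minimal sketch using CFFI (a portable foreign-function interface for Common Lisp; it isn't LLVM-specific, and the library and function names below are hypothetical):

    ;; Bind to an existing C library and call one of its functions from Lisp.
    (ql:quickload :cffi)

    (cffi:define-foreign-library libgeom          ; hypothetical C library
      (t (:default "libgeom")))
    (cffi:use-foreign-library libgeom)

    ;; Wraps a hypothetical C function:
    ;;   double clamp(double x, double lo, double hi);
    (cffi:defcfun ("clamp" c-clamp) :double
      (x :double) (lo :double) (hi :double))

    ;; (c-clamp 1.5d0 0.0d0 1.0d0) => 1.0d0

An LLVM-based implementation like Clasp aims further, at direct C++ interop, but plain CFFI is already enough to mix Lisp into the C parts incrementally.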
"Of the 1.2 million lines of code, roughly 900,000 lines are written in GOAL."
Still, that leaves 300,000 lines, which were certainly the most low-level and performance-sensitive ones.
The 900,000 lines aren't exactly "the Lisp" but that custom language (GOAL), developed using the commercial Lisp. Still, I admit it's certainly an engineering achievement.
Yes, I am aware. I didn't say that GCC wasn't necessary. Performance does matter. However, if he had had a better Lisp, then as Moore's Law advanced over the past three decades, Lisp could have become more common. Emacs Lisp is slow, and it's also not a standard like Common Lisp or Scheme.
We'd all agree C isn't as necessary today. Lots of great stuff is done in Ruby, Python, JavaScript, Node.js, Clojure, etc. Developers are building editors in browsers (atom.io), and Microsoft used Atom with 200k lines of TypeScript in their new cross-platform tools.
There are loads of languages like that; I don't think you can blame RMS for this. For example, the MLs are currently kinda hot, OCaml especially, and have been around for donkey's years; but in the early 1990s they were very slow.
Lots of the applications written in ML are barely viable; the only reason they're performant now is Moore's Law.
It doesn't matter what existed three decades ago. You have the environment that exists now. You can use a better Lisp, if it's good enough now, for what you need to do. For resource-limited and performance-critical stuff I still use C, and a lot of current environments still need C-level performance and C-like resource-use patterns.
And all this is not because of what RMS did or didn't do. He did great things, and I don't agree that anybody can claim he could have done something differently given the real constraints he had.
Maybe I should have said "a PC" instead of "a computer." As in, something that actually had a hard disk. Did GNU software run on the Amiga 500? Would it have been a good decision to target it instead of the PC compatibles? I don't think so.
Stallman likely worked in the Lisp you're describing. He was a researcher at MIT's AI Lab. The Lisp machines they went on to produce actually ran a hybrid, natively compiled OOP Lisp, aimed at solving a lot of the low-level issues you'll run into writing a kernel or system tools in Lisp. I haven't worked in this language; I've just seen it referenced in a large number of places.
As he ultimately decided not to go in this direction, I'm going to assume he considered it more of a dead end.