John McCarthy’s collection of numerical facts for use in elisp programs (stanford.edu)
279 points by chrchr on Sept 7, 2023 | 79 comments



"It contains a lot of units conversions and also astronomical facts that I needed for some research on moving Mars to a more temperate ___location."

lmfao, anywhere I can read more about McCarthy's plans for this?



Wow, thank you! This made my day: "Consider the possibility that introducing an arbitrarily small tame asteroid could throw a planet out of the solar system. This would be a different kind of instability than those that have been so far considered. It would be of mathematical interest to prove that a system consisting of a sun and three planets could be disrupted by a single arbitrarily small tame asteroid."


This is considerably more interesting than the original link...


The sci-fi book practically writes itself. A wealthy entrepreneur puts an asteroid in motion to move mars and sets up a long term foundation to keep it operational for the next 40,000 years. In conjunction with the orbital adjustment, terraforming begins and the first enclosed permanent settlements are founded. After the orbital changes are complete, the new planet is open for human development and is renamed from Mars to Musk, after its visionary founder.


Of course things go horribly wrong, and Mars crashes into earth. The moral of the story being if you don't have ultra-superior technology and experience doing the same thing outside of your own solar system and checking the results to adjust your procedures, then nudging things and seeing what happens at planetary scale is playing with fire.


I thought it was the short stories "Sunken Gardens" and "Cicada Queen" by Bruce Sterling from his Shaper/Mechanist universe.

Oh well.


Is there any better embodiment of entrepreneurial capitalism than stealthily appropriating an entire planet, privatising it, and then naming it after one's self? ... I'd be hard pressed to think of one.


Hans Moravec is really into space elevators. Is there some correlation between being an AI researcher and having an interest in space mega-engineering?


Almost certainly, simply because they both strongly correlate with being geeks. And SF geeks in particular. And the very particular sort of hard SF geek who feels right at home at Shock Level 4 [1].

[1] http://sl4.org/shocklevels.html and literally on http://sl4.org/


yes, it's known as the nerd space elevator strange attractor. It is frequently observed in nature with its related chaotic entities, the national-scale DC power grid attractor, and the singularity attractor (some argue that all these attractors are really just specific instances of the singularity attractor).


It's a nice hard problem with lots of juicy subproblems that you can work on even though the central problem (materials science) is nowhere near being solved.


John McCarthy was Moravec's PhD advisor.


I wonder what will happen to the other planets on a long timescale... in a dynamical (chaotic) system stability is not guaranteed, so a bunch of asteroid-tools would probably be needed to correct the deviation of all the other planets from a desired configuration.


[flagged]


Careful, AI has been known to lie.


I know, but I'm still not sure which is better, asking an LLM or googling? (Google search has been quietly deteriorating.)


As time progresses "better" becomes "going to the public library and working with a research librarian to figure it out."


LLM then Google? "The Greening of Mars" is not by Kim Stanley Robinson, but by James Lovelock and Michael Allaby. It doesn't appear to feature moving Mars, but terraforming it using 80's technology. I don't think "Green Mars" by Robinson has Mars being moved either.

There's nothing on Google for Benford and "Martian Transfer", though it's possible he wrote such a story with a different title.


LLM approximates throwing the google snippets into a blender so that you can't evaluate anything and the nouns are very likely to get swapped.

Googling is much better.


Maybe an easier job would be to cool Venus by blocking sunlight from reaching it. Then, terraform it.


An easier job would be maintaining the habitability of the near-perfect planet we already have x) but it seems we can't even do that! Seems silly when SV tech bros say terraforming other planets will be what "saves humanity" or some such.


What it gives us is a backup plan for humanity. There are many reasons the Earth could become uninhabitable, hardly all of them caused by man. For example, an asteroid hit.


temporal proximity to now, closest to furthest

0. climate change has already entered an irreversible spiral

1. decades from now, the irreversible spiral having deepened and accelerated, civilization as we know it has been destroyed, leaving no authorities to do anything about climate change

2. decades to centuries from now, climate change makes earth mostly uninhabitable

3. hundreds or thousands of years in the future, if they weren't all dead, humans perhaps would have been able to maintain a self-sustaining colony for humanity on another planet

there's not a realistic path for humanity to survive an earth-destroying asteroid, and there never will be


And then you get a world bathed in eternal darkness until you control its greenhouse gas atmosphere?

I think it would be easier to just limit greenhouse gas emissions on earth.

Your point works pretty well considering that Venus has a dense atmosphere, unlike Mars.


Just need to reduce the solar incidence, not eliminate it.

The same technology could be used to reduce global warming on Earth:

1. an orbiting sunshade

2. putting something in the atmosphere that makes it more reflective, like sulfur dioxide

I wonder how big a sunshade would need to be to reduce temps by, say, one degree. A specific ___location could be shaded by putting the shade in geosynchronous orbit. To finance the sunshade, put a Starbucks or Coca-Cola logo on it.
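As a rough back-of-the-envelope, in the spirit of McCarthy's file (the figure of roughly 1% of insolation blocked per degree of cooling is my own loose assumption, not something from the thread):

  (setq earth-radius 6.371e6)                                       ; m
  (setq earth-cross-section (* float-pi earth-radius earth-radius)) ; ~1.27e14 m^2
  ;; assume, very loosely, blocking ~1% of insolation cools by ~1 degree C
  (setq shade-area (* 0.01 earth-cross-section))                    ; ~1.3e12 m^2

That works out to a shade roughly 1100 km on a side, so "visible from the ground" is plausible only if it were concentrated rather than dispersed.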


The kind of sunshades that are reasonably being proposed are not large enough to see from the earth without a very powerful telescope.

A single giant megaproject sunshade is not feasible. But it is possible to make many smaller ones, each diminishing insolation by just a few fractions of a percent.


Or a giant “Welcome to Disney’s World.”


Since so many believe aliens are here already we might as well start advertising to them.


It certainly would. However, these are centuries long plans. A path along which the local issues will be solved.


the intellectual attraction lies exactly in terraforming other planets besides earth. Dreaming big.

> I think it would be easier to just limit greenhouse gas emissions on earth.

Of course it would, but that's beside the point


There’s a great video on terraforming Venus: https://youtu.be/G-WO-z-QuWI?si=7D-mamEMlkHgNkgS


"To speed things up, CO2 would be strategically released to supply the plants and cyanobacteria."

Somehow I doubt our ability to do this strategically...




But me big man. Me move mars.


Maybe it's for the better that McCarthy passed away in 2011; if he were still alive, he might have gotten a grant from SpaceX to put his scheme into action.


I think you mean "put his emacs lisp into action"


Does spaceX give out grants to programmers?


This might come in handy but it's much inferior to the frink units file [0], which is more complete, more rigorous, and includes dimensions.

[0]: https://frinklang.org/frinkdata/units.txt


That rant about the hertz is HN top material in and of itself.


> I think the candela is a scam


Oh my god this is incredible


Some may find it interesting that John McCarthy coined the term Artificial Intelligence and was called a/the father of AI before the deep learning folks took over. This AI referred to the symbolic flavor not the connectionist one.

Nowadays (god-)fathers of AI are Geoff Hinton and Yann LeCun and others, but 20 years ago things were very different…


20 years ago symbolic reasoning was already called GOFAI and the hotness was Bayesian statistical reasoning. 40 years ago, maybe.


I was a kid back then, so not really aware of the cutting edge research happening in those days, but here’s for example the description of Russel & Norvig’s “Artificial Intelligence: A Modern Approach (2nd edition, published 2003)”:

“The long-anticipated revision of this best-selling book offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Intelligent Agents. Solving Problems by Searching. Informed Search Methods. Game Playing. Agents that Reason Logically. First-order Logic. Building a Knowledge Base. Inference in First-Order Logic. Logical Reasoning Systems. Practical Planning. Planning and Acting. Uncertainty. Probabilistic Reasoning Systems. Making Simple Decisions. Making Complex Decisions. Learning from Observations. Learning with Neural Networks. Reinforcement Learning. Knowledge in Learning. Agents that Communicate. Practical Communication in English. Perception. Robotics. For those interested in artificial intelligence.”


By necessity, textbooks lag the actual work in most fields. Nonetheless the titular “modern approach” is the statistical reasoning (plus a heavy focus on search over inference).


> Nowadays (god-)fathers of AI are Geoff Hinton and Yann LeCun

Schmidhuber's part of the story should also be considered more, I think


May I ask for more details? I don't know much about the ML/DL scenes; is he the one behind the LSTM and the European side of the story?


He and Sepp Hochreiter invented the LSTM model.


> This AI referred to the symbolic flavor not the connectionist one

that's not a fair characterization at all. The title contains the term "Artificial Intelligence", and soon after, item 3 in the proposal was "Neuron Nets". The word symbolic is not used at all, although the term Abstractions is used.

http://www-formal.stanford.edu/jmc/history/dartmouth/dartmou...

A PROPOSAL FOR THE DARTMOUTH SUMMER RESEARCH PROJECT ON ARTIFICIAL INTELLIGENCE

J. McCarthy, Dartmouth College M. L. Minsky, Harvard University N. Rochester, I.B.M. Corporation C.E. Shannon, Bell Telephone Laboratories

August 31, 1955

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

The following are some aspects of the artificial intelligence problem:

1. Automatic Computers

If a machine can do a job, then an automatic calculator can be programmed to simulate the machine. The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have.

2. How Can a Computer be Programmed to Use a Language

It may be speculated that a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture. From this point of view, forming a generalization consists of admitting a new word and some rules whereby sentences containing it imply and are implied by others. This idea has never been very precisely formulated nor have examples been worked out.

3. Neuron Nets

How can a set of (hypothetical) neurons be arranged so as to form concepts. Considerable theoretical and experimental work has been done on this problem by Uttley, Rashevsky and his group, Farley and Clark, Pitts and McCulloch, Minsky, Rochester and Holland, and others. Partial results have been obtained but the problem needs more theoretical work.

4. Theory of the Size of a Calculation

If we are given a well-defined problem (one for which it is possible to test mechanically whether or not a proposed answer is a valid answer) one way of solving it is to try all possible answers in order. This method is inefficient, and to exclude it one must have some criterion for efficiency of calculation. Some consideration will show that to get a measure of the efficiency of a calculation it is necessary to have on hand a method of measuring the complexity of calculating devices which in turn can be done if one has a theory of the complexity of functions. Some partial results on this problem have been obtained by Shannon, and also by McCarthy.

5. Self-Improvement

Probably a truly intelligent machine will carry out activities which may best be described as self-improvement. Some schemes for doing this have been proposed and are worth further study. It seems likely that this question can be studied abstractly as well.

6. Abstractions

A number of types of ``abstraction'' can be distinctly defined and several others less distinctly. A direct attempt to classify these and to describe machine methods of forming abstractions from sensory and other data would seem worthwhile.

7. Randomness and Creativity

A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness. The randomness must be guided by intuition to be efficient. In other words, the educated guess or the hunch include controlled randomness in otherwise orderly thinking.

In addition to the above collectively formulated problems for study, we have asked the individuals taking part to describe what they will work on. Statements by the four originators of the project are attached.

... the paper continues ...


As a Lisper, I'll always refer to McCarthy and Minsky as the fathers of AI.


I guess if you want a library for linear algebra operations in Lisp, you just write them yourself. From McCarthy's collection:

  ;;; multiplying a matrix by a column vector
  (defun mvmult (matrix vector)
    (list (scap (nth 0 matrix) vector) (scap (nth 1 matrix) vector)))

  ;;; sum of two vectors
  (defun vplus (vec1 vec2)
    (list (+ (nth 0 vec1) (nth 0 vec2)) (+ (nth 1 vec1) (nth 1 vec2))))

  ;;; difference of two vectors
  (defun vminus (vec1 vec2)
    (list (- (nth 0 vec1) (nth 0 vec2)) (- (nth 1 vec1) (nth 1 vec2))))

  ;;; scalar product of two vectors
  (defun scap (vec1 vec2)
    (+ (* (nth 0 vec1) (nth 0 vec2)) (* (nth 1 vec1) (nth 1 vec2))))

  ;;; product of scalar and vector
  (defun svmult (sca vec)
    (list (* sca (nth 0 vec)) (* sca (nth 1 vec))))

  ;;; sum of a list of vectors
  (defun addup (veclist)
    (if (null veclist) zerovec (vplus (car veclist) (addup (cdr veclist)))))

  (defconst zerovec '(0 0) "zero vector with two components")

  ;;; length of a vector
  (defun length (x)
    (sqrt (+ (expt (nth 0 x) 2) (expt (nth 1 x) 2))))

  (defconst Imatrix '((1.0 0.0) (0.0 1.0)) "unit 2x2 matrix")

  ;;; product of scalar and matrix
  (defun smmult (sca matrix)
    (list (svmult sca (nth 0 matrix)) (svmult sca (nth 1 matrix))))

  ;;; sum of two matrices
  (defun mplus (mat1 mat2)
    (list (vplus (nth 0 mat1) (nth 0 mat2)) (vplus (nth 1 mat1) (nth 1 mat2))))

  ;;; difference of two matrices
  (defun mminus (mat1 mat2)
    (list (vminus (nth 0 mat1) (nth 0 mat2)) (vminus (nth 1 mat1) (nth 1 mat2))))

  ;;; matrix product: rows of mat1 dotted with columns of mat2
  (defun mmult (mat1 mat2)
    (list (list (scap (nth 0 mat1) (col 0 mat2)) (scap (nth 0 mat1) (col 1 mat2)))
          (list (scap (nth 1 mat1) (col 0 mat2)) (scap (nth 1 mat1) (col 1 mat2)))))

  ;;; product of a list of matrices
  (defun multiplyup (matlist)
    (if (null matlist) Imatrix (mmult (car matlist) (multiplyup (cdr matlist)))))
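A quick sanity check of the 2x2 helpers quoted above, assuming the file is loaded (multiplying by the identity matrix should return the vector, coerced to floats):

  (mvmult Imatrix '(3 4)) ; => (3.0 4.0)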


Of course one can do this, and it might work as a stopgap solution and abstraction layer until something different comes along (I have done similar), but ultimately this will not be very efficient. A better, albeit less beautiful, solution might be to use linear algebra bindings, implemented in some low-level language, for the Lisp at hand. Hopefully one exists.



I made a GitHub repo mirroring it: https://github.com/armcknight/john-mccarthy-numerical-facts

Also saw someone else just added it to one of theirs: https://github.com/bbarclay7/bb-emacs/blob/5858823bb033be113...


Now we know: John McCarthy used (x)emacs. Clearly an important revelation in the ongoing browser wars. /s (and not to minimize how genuinely interesting this is)


What's interesting to me is that the creator of Lisp found Emacs Lisp acceptable for practical general use. (Although as noted this file can be loaded verbatim into Common Lisp and work fine.)


McCarthy did not invent Lisp with the expectation that it even could be used as a language for real computers. Steve Russell had to convince him that McCarthy's theoretical definition of Lisp would actually work by actually implementing the interpreter himself.

I don't think McCarthy would have been too snobbish about dialects, considering he wasn't expecting a practical language at all :D


McCarthy was definitely running Lisp as a language implementation project. He was just surprised that his specification of the evaluation could be literally translated to code and executed; at first he thought Russell had misunderstood, that the spec was just for people to read, not for the machine. Someone was supposed to read it and implement the behavior.

It's kind of hard to understand why McCarthy was surprised. Anyone sitting down to follow the spec and make it work on the machine would soon realize that they are just mechanically converting the expressions to code. Like where the specification calls APPLY with certain arguments, your machine code has to obtain those argument values and call APPLY in the same way. It will soon be obvious that you're just hand-coding exactly what is written. People were already doing that. FORTRAN is "formula translator"; before Fortran, formulas had to be translated to code by hand. Russell was just "fortranning" the formulas in the Lisp specification, like you would arithmetic formulas.


> McCarthy did not invent Lisp with the expectation that it even could be used as a language for real computers.

He was expecting a practical language and was designing one. Lisp was from day zero a project to implement a real programming language for a real computer.

Earlier he experimented with IPL and also list processing / functional programming in Fortran.

The original plan was to implement a Lisp compiler. At first the Lisp code, which McCarthy was experimenting with, was manually translated to machine code. So before the first Lisp compiler or interpreter existed, he was experimenting with how to represent the code and make it run on a computer.

He then invented a Lisp interpreter in Lisp. Then came the idea (from Steve Russell) to use this EVAL as the base for an interpreter implementation, which was also done by manually translating the Lisp code to machine language. McCarthy at first thought that it was not possible to use this Lisp evaluator written in Lisp as a base, but Steve Russell implemented it.

Around 1962, a Lisp compiler followed.

The paper by Herbert Stoyan describes the history of McCarthy's Lisp project:

https://github.com/papers-we-love/papers-we-love/blob/main/c...

> considering he wasn't expecting a practical language at all :D

He was expecting a practical language, since he was designing one and the team was implementing it. He designed data structures, core functions, evaluation, garbage collection as memory management, input and output, ... The language was to be used as a list processing language for research in the upcoming field of Artificial Intelligence: game play, natural language dialogs, symbolic maths (integration, ...), logic reasoning, computer vision, robotics, ...

He wrote several design documents on the language and its implementation. There even is the Lisp I Programmer's Manual from March 1960:

https://bitsavers.org/pdf/mit/rle_lisp/LISP_I_Programmers_Ma...

Lisp was initially developed for an IBM 704 computer and primitive list operations CAR (= FIRST, HEAD) and CDR (= REST, TAIL) were referencing machine architecture details of the IBM 704.


Some of the constants have changed over the years.

> (setq avogadro 6.0221367e23) ; Avogadro number

This is now standardized to exactly 6.02214076e23


Very cool, but I wonder why he defined foot as:

(setq foot (* 0.3048 m))

And not

(setq foot (* 12 inch))

It comes to the same thing, but the inch is defined in metric as 2.54 cm and the foot is a derived unit of the inch. Defining it that way would clearly spell out the dependency.

I'm not criticizing, it was his library for his use, I'm just wondering if there is a deeper meaning beyond "God enough"?
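The dependency-first style suggested here would look something like this (my sketch, not McCarthy's code, assuming m is the base unit as in his file):

  (setq m 1.0)
  (setq cm (* 0.01 m))
  (setq inch (* 2.54 cm))   ; 1 in = 2.54 cm exactly, by definition
  (setq foot (* 12 inch))   ; foot derived from inch
  (setq mile (* 5280 foot))

All values still come out in meters (foot evaluates to 0.3048), but the derivation chain is explicit.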


He wanted to work in meters? Closer to his final result?

  (setq km (* m 1000.0))
  (setq cm (* m 0.01))
  (setq foot (* 0.3048 m))
  (setq ft (* 0.3048 m))
  (setq mile (* 5280 foot))


The very snippet you gave there is a counterexample to your argument. He defines miles in terms of feet (which is in turn defined by meters) allowing him to use commonly known conversion factors as a sanity check, while still keeping all values in meters. If he had used his already present definition of an inch as 0.0254 meters to define feet, he could have compounded this even further. The true answer is almost certainly that he simply did whatever came to mind first, and didn't think of defining feet in terms of inches because he hadn't defined inches yet.


I mean the legal definition of the inch is 25.4 mm. All other imperial length units are derived from the inch.

Furthermore, there is more than one definition of the inch. If he worked on astronomical data from the 19th century UK, all these units would have to be changed. By tying the foot to the inch he'd only have to redefine the inch to the old British inch.


Heh, the creator of Lisp probably was "God enough" to redef constants as he saw fit.


One thing I always wonder is how people throughout history, in famous historic codebases etc., can indent things... 90% of the way, and then out of dozens of things correct there will be 3 things that are painfully off. Man, just indent them all the way if you're going to do it at all.

I've seen this in Dennis Ritchie's code, the Doom code, etc.

Am I the only person who sees these things stand out like a sore thumb? It would drive me mad. They either all have to be unaligned, or aligned, but not 90%.


Ok, a warmer Mars is desirable. But is Mars, in Earth's orbit but on the opposite side of the sun, really closer?


Without a magnetosphere it's pretty useless. It would just lose its atmosphere again within a few hundred years.


Agreed. It's the very definition of a dead planet, and there is even a high probability that it had life before Earth.


Closer in terms of what? Distance? delta-v to get there? Time to get there?


In travel time, no. In energy cost, yes, but only if you're willing to wait a long time to get there. Also impossible to communicate with directly by radio.


> Ok, a warmer Mars is desirable.

To humans.


yeah, well that is the typical comparison one would make as a human..

you are a human, right?


So, I know that elisp has historically lacked lexical scope, so setting variables without a prefix has the potential for name clashes since even a variable setq'd into existence inside a defun will be added to the global namespace.

I did an experiment to double check it's still true in a recent-ish emacs:

  (defun my-fun ()
    (setq test-123-456 'this-is-a-test))

  (my-fun)

  test-123-456 => this-is-a-test
There is some information in the info file under elisp about lexical binding, but you can just use let to keep variables in a lexical scope.

  (defun my-other-fun ()
    (let ((test-789 'this-is-another-test))
      test-789))

  (my-other-fun)

  test-789 => *** Eval error ***  Symbol’s value as variable is void: test-789
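For completeness: since Emacs 24, a file can opt into real lexical scope with a first-line cookie, after which let-bindings are captured by closures rather than being dynamic. A sketch:

  ;;; -*- lexical-binding: t; -*-
  (defun make-counter ()
    (let ((n 0))
      (lambda () (setq n (1+ n)))))  ; n is closed over lexically

  (setq c (make-counter))
  (funcall c) ; => 1
  (funcall c) ; => 2

Under dynamic binding the lambda would instead look up a global n at call time and signal a void-variable error here.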


Are equations being renamed to 'numerical facts' now?


pony := 1 floz

jigger := 1.5 floz


[flagged]


All Lisp related stories, especially those that refer to the creator of Lisp, get automagically upvoted.


Because some people find it interesting. Same as everything else on HN's front page.



