Wow, thank you! This made my day: "Consider the possibility that introducing an arbitrarily small tame asteroid
could throw a planet out of the solar system. This would be a different kind
of instability than those that have been so far considered. It would be of
mathematical interest to prove that a system consisting of a sun and three
planets could be disrupted by a single arbitrarily small tame asteroid."
The sci-fi book practically writes itself. A wealthy entrepreneur puts an asteroid in motion to move Mars and sets up a long-term foundation to keep it operational for the next 40,000 years. In conjunction with the orbital adjustment, terraforming begins and the first enclosed permanent settlements are founded. After the orbital changes are complete, the new planet is open for human development and is renamed from Mars to Musk, after its visionary founder.
Of course things go horribly wrong, and Mars crashes into Earth. The moral of the story: if you don't have ultra-superior technology, plus experience doing the same thing outside your own solar system and checking the results to refine your procedures, then nudging things and seeing what happens at planetary scale is playing with fire.
Is there any better embodiment of entrepreneurial capitalism than stealthily appropriating an entire planet, privatising it, and then naming it after oneself? ... I'd be hard-pressed to think of one.
Hans Moravec is really into space elevators. Is there some correlation between being an AI researcher and having an interest in space mega-engineering?
Almost certainly, simply because they both strongly correlate with being geeks. And SF geeks in particular. And the very particular sort of hard SF geek who feels right at home at Shock Level 4 [1].
Yes, it's known as the nerd space elevator strange attractor. It is frequently observed in nature alongside its related chaotic entities, the national-scale DC power grid attractor and the singularity attractor (some argue that all these attractors are really just specific instances of the singularity attractor).
It's a nice hard problem with lots of juicy subproblems that you can work on even though the central problem (materials science) is nowhere near being solvable.
I wonder what will happen to the other planets on a long timescale... in a dynamical (chaotic) system stability is not guaranteed; probably a bunch of asteroid-tools would be needed to correct the deviation of all the other planets from a desired configuration.
LLM then Google? "The Greening of Mars" is not by Kim Stanley Robinson, but by James Lovelock and Michael Allaby. It doesn't appear to feature moving Mars, but rather terraforming it using 80s technology. I don't think "Green Mars" by Robinson has Mars being moved either.
There's nothing on Google for Benford and "Martian Transfer", though it's possible he wrote such a story with a different title.
An easier job would be maintaining the habitability of the near-perfect planet we already have x) but it seems we can't even do that! It seems silly when SV tech bros say terraforming other planets will be what "saves humanity" or some such.
What it gives us is a backup plan for humanity. There are many reasons the Earth could become uninhabitable, hardly all of them caused by man. For example, an asteroid hit.
0. climate change has already entered an irreversible spiral
1. decades from now, the irreversible spiral having deepened and accelerated, civilization as we know it has been destroyed, leaving no authorities to do anything about climate change
2. decades to centuries from now, climate change makes earth mostly uninhabitable
3. hundreds or thousands of years in the future, if they weren't all dead, humans perhaps would have been able to maintain a self-sustaining colony for humanity on another planet
There's not a realistic path for humanity to survive an Earth-destroying asteroid, and there never will be.
Just need to reduce the solar incidence, not eliminate it.
The same technology could be used to reduce global warming on Earth:
1. an orbiting sunshade
2. putting something in the atmosphere that makes it more reflective, like sulfur dioxide
I wonder how big a sunshade would need to be to reduce temps by, say, one degree. A specific ___location could be shaded by putting the shade in geosynchronous orbit. To finance the sunshade, put a Starbucks or Coca-Cola logo on it.
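A rough back-of-the-envelope, assuming the shade simply blocks a fraction of the sunlight Earth intercepts and a climate sensitivity of about 0.8 K per W/m^2 (every constant below is my assumption, not from the thread):

;; Sunshade sizing sketch: how much of Earth's cross-section must be
;; shaded to cool the surface by dt kelvin. All constants are rough.
(let* ((s0 1361.0)         ; solar constant, W/m^2
       (albedo 0.3)        ; fraction of sunlight Earth already reflects
       (sensitivity 0.8)   ; K of cooling per W/m^2 of forcing (rough)
       (r-earth 6.371e6)   ; Earth's radius, m
       (dt 1.0)            ; desired cooling, K
       (df (/ dt sensitivity))              ; forcing to remove, W/m^2
       (f (/ df (* s0 (- 1 albedo) 0.25)))  ; fraction of sunlight to block
       (area (* f float-pi r-earth r-earth)))
  (message "block %.2f%% of sunlight: ~%.0f thousand km^2"
           (* 100 f) (/ area 1e9)))
;; => "block 0.52% of sunlight: ~669 thousand km^2"

So roughly half a percent of insolation, a shade on the order of hundreds of thousands of square kilometres, before worrying about where to park it.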
The kind of sunshades that are reasonably being proposed are not large enough to see from the earth without a very powerful telescope.
A single giant megaproject sunshade is not feasible. But it is possible to make many smaller ones, each diminishing insolation by just a few fractions of a percent.
Maybe it's for the better that McCarthy passed away in 2011; if he were still alive, he might have gotten a grant from SpaceX to put his scheme into action.
Some may find it interesting that John McCarthy coined the term Artificial Intelligence and was called a/the father of AI before the deep learning folks took over. This AI referred to the symbolic flavor, not the connectionist one.
Nowadays (god-)fathers of AI are Geoff Hinton and Yann LeCun and others, but 20 years ago things were very different…
I was a kid back then, so not really aware of the cutting-edge research happening in those days, but here’s, for example, the description of Russell & Norvig’s “Artificial Intelligence: A Modern Approach” (2nd edition, published 2003):
“The long-anticipated revision of this best-selling book offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Intelligent Agents. Solving Problems by Searching. Informed Search Methods. Game Playing. Agents that Reason Logically. First-order Logic. Building a Knowledge Base. Inference in First-Order Logic. Logical Reasoning Systems. Practical Planning. Planning and Acting. Uncertainty. Probabilistic Reasoning Systems. Making Simple Decisions. Making Complex Decisions. Learning from Observations. Learning with Neural Networks. Reinforcement Learning. Knowledge in Learning. Agents that Communicate. Practical Communication in English. Perception. Robotics. For those interested in artificial intelligence.”
By necessity, textbooks lag the actual work in most fields. Nonetheless, the titular “modern approach” is statistical reasoning (plus a heavy focus on search over inference).
> This AI referred to the symbolic flavor, not the connectionist one
That's not a fair characterization at all. The title contains the term "Artificial Intelligence", and soon after, item 3 in the proposal is "Neuron Nets". The word symbolic is not used at all, although the term Abstractions is used.
A PROPOSAL FOR THE DARTMOUTH SUMMER RESEARCH PROJECT ON ARTIFICIAL INTELLIGENCE
J. McCarthy, Dartmouth College
M. L. Minsky, Harvard University
N. Rochester, I.B.M. Corporation
C.E. Shannon, Bell Telephone Laboratories
August 31, 1955
We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
The following are some aspects of the artificial intelligence problem:
1. Automatic Computers
If a machine can do a job, then an automatic calculator can be programmed to simulate the machine. The speeds and memory capacities of present computers may be insufficient to simulate many of the higher functions of the human brain, but the major obstacle is not lack of machine capacity, but our inability to write programs taking full advantage of what we have.
2. How Can a Computer be Programmed to Use a Language
It may be speculated that a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture. From this point of view, forming a generalization consists of admitting a new word and some rules whereby sentences containing it imply and are implied by others. This idea has never been very precisely formulated nor have examples been worked out.
3. Neuron Nets
How can a set of (hypothetical) neurons be arranged so as to form concepts. Considerable theoretical and experimental work has been done on this problem by Uttley, Rashevsky and his group, Farley and Clark, Pitts and McCulloch, Minsky, Rochester and Holland, and others. Partial results have been obtained but the problem needs more theoretical work.
4. Theory of the Size of a Calculation
If we are given a well-defined problem (one for which it is possible to test mechanically whether or not a proposed answer is a valid answer) one way of solving it is to try all possible answers in order. This method is inefficient, and to exclude it one must have some criterion for efficiency of calculation. Some consideration will show that to get a measure of the efficiency of a calculation it is necessary to have on hand a method of measuring the complexity of calculating devices which in turn can be done if one has a theory of the complexity of functions. Some partial results on this problem have been obtained by Shannon, and also by McCarthy.
5. Self-Improvement
Probably a truly intelligent machine will carry out activities which may best be described as self-improvement. Some schemes for doing this have been proposed and are worth further study. It seems likely that this question can be studied abstractly as well.
6. Abstractions
A number of types of "abstraction" can be distinctly defined and several others less distinctly. A direct attempt to classify these and to describe machine methods of forming abstractions from sensory and other data would seem worthwhile.
7. Randomness and Creativity
A fairly attractive and yet clearly incomplete conjecture is that the difference between creative thinking and unimaginative competent thinking lies in the injection of a some randomness. The randomness must be guided by intuition to be efficient. In other words, the educated guess or the hunch include controlled randomness in otherwise orderly thinking.
In addition to the above collectively formulated problems for study, we have asked the individuals taking part to describe what they will work on. Statements by the four originators of the project are attached.
Of course one can do this, and it might work as a stopgap solution and abstraction layer until something different comes along (I have done similar), but ultimately this will not be very efficient. It might be a better, albeit less beautiful, solution to use linear algebra bindings implemented in some low-level language for the Lisp at hand. Hopefully one exists.
Now we know: John McCarthy used (x)emacs. Clearly an important revelation in the ongoing editor wars.
/s (and not to minimize how genuinely interesting this is)
What's interesting to me is that the creator of Lisp found Emacs Lisp acceptable for practical general use. (Although as noted this file can be loaded verbatim into Common Lisp and work fine.)
McCarthy did not invent Lisp with the expectation that it even could be used as a language for real computers. Steve Russell had to convince him that McCarthy's theoretical definition of Lisp would actually work, by implementing the interpreter himself.
I don't think McCarthy would have been too snobbish about dialects, considering he wasn't expecting a practical language at all :D
McCarthy was definitely running Lisp as a language implementation project. He was just surprised that his specification of the evaluation could be literally translated to code and executed; at first he thought Russell had misunderstood that the spec was just for people to read, not for the machine. Someone was supposed to read it and implement the behavior.
It's kind of hard to understand why McCarthy was surprised. Anyone sitting down to follow the spec and make it work on the machine would soon realize that they are just mechanically converting the expressions to code. Like where the specification calls APPLY with certain arguments, your machine code has to obtain those argument values and call APPLY in the same way. It soon becomes obvious that you're just hand-coding exactly what is written. People were already doing that. FORTRAN is "formula translator"; before Fortran, formulas had to be translated to code by hand. Russell was just "fortranning" the formulas in the Lisp specification, like you would arithmetic formulas.
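For the curious, the core that had to be hand-translated is tiny. Here's a minimal sketch of a McCarthy-style EVAL/APPLY in Emacs Lisp (modern syntax, with names of my own choosing, not his original M-expressions):

(require 'cl-lib)

;; Toy evaluator: symbols look up in an alist environment,
;; QUOTE/IF/LAMBDA are special forms, everything else is application.
(defun mc-eval (e env)
  (cond ((symbolp e) (cdr (assq e env)))
        ((atom e) e)                      ; numbers etc. self-evaluate
        ((eq (car e) 'quote) (cadr e))
        ((eq (car e) 'if) (if (mc-eval (nth 1 e) env)
                              (mc-eval (nth 2 e) env)
                            (mc-eval (nth 3 e) env)))
        ((eq (car e) 'lambda) e)          ; lambdas evaluate to themselves
        (t (mc-apply (mc-eval (car e) env)
                     (mapcar (lambda (a) (mc-eval a env)) (cdr e))
                     env))))

(defun mc-apply (f args env)
  (if (and (consp f) (eq (car f) 'lambda))
      ;; (lambda (params...) body): bind params to args, eval the body
      (mc-eval (nth 2 f) (append (cl-pairlis (nth 1 f) args) env))
    (apply f args)))                      ; host primitive such as #'*

;; (mc-eval '((lambda (x) (* x x)) 7) (list (cons '* #'*)))  => 49

Every clause is exactly the kind of thing Russell could translate mechanically into 704 assembly.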
> McCarthy did not invent Lisp with the expectation that it even could be used as a language for real computers.
He was expecting a practical language and was designing one. Lisp was from day zero a project to implement a real programming language for a real computer.
Earlier he experimented with IPL and also list processing / functional programming in Fortran.
The original plan was to implement a Lisp compiler. At first, the Lisp code McCarthy was experimenting with was manually translated to machine code. So before the first Lisp compiler or interpreter existed, he was already experimenting with how to represent the code and make it run on a computer.
He then wrote a Lisp interpreter in Lisp. Then came the idea (from Steve Russell) to use this EVAL as the basis for an interpreter implementation, which was also realized by manually translating the Lisp code to machine language. McCarthy at first thought it was not possible to use this Lisp evaluator written in Lisp as a base, but Steve Russell implemented it.
Around 1962, a Lisp compiler followed.
The paper by Herbert Stoyan describes the history of McCarthy's Lisp project.
> considering he wasn't expecting a practical language at all :D
He was expecting a practical language, since he was designing one and the team was implementing it. He designed data structures, core functions, evaluation, garbage collection as memory management, input and output, ... The language was to be used as a list processing language for research in the upcoming field of Artificial Intelligence: game play, natural language dialogs, symbolic maths (integration, ...), logic reasoning, computer vision, robotics, ...
He wrote several design documents on the language and its implementation. There is even the Lisp I Programmer's Manual from March 1960.
Lisp was initially developed for an IBM 704 computer, and the primitive list operations CAR (= FIRST, HEAD) and CDR (= REST, TAIL) referenced machine architecture details of the IBM 704.
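Specifically, CAR stood for "Contents of the Address part of Register" and CDR for "Contents of the Decrement part of Register", after the two fields of a 704 machine word used to hold list cells. In any Lisp today:

(car '(a b c))  ; => a      -- the "address" half: the first element
(cdr '(a b c))  ; => (b c)  -- the "decrement" half: the rest of the list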
Very cool, but I wonder why he defined foot as:
(setq foot (* 0.3048 m))
And not
(setq foot (* 12 inch))
It comes to the same thing, but the inch is defined in metric as 2.54 cm and the foot is a unit derived from the inch. Defining it that way would clearly spell out the dependency.
I'm not criticizing, it was his library for his use; I'm just wondering if there is a deeper meaning beyond "good enough"?
The very snippet you gave there is a counterexample to your argument. He defines miles in terms of feet (which is in turn defined by meters) allowing him to use commonly known conversion factors as a sanity check, while still keeping all values in meters. If he had used his already present definition of an inch as 0.0254 meters to define feet, he could have compounded this even further. The true answer is almost certainly that he simply did whatever came to mind first, and didn't think of defining feet in terms of inches because he hadn't defined inches yet.
I mean the legal definition of the inch is 25.4 mm. All other imperial length units are derived from the inch.
Furthermore, there is more than one definition of the inch. If he worked on astronomical data from 19th-century UK sources, all these units would have to be changed. By tying the foot to the inch he'd only have to redefine the inch to the old British inch.
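Concretely, the inch-first chain being suggested would look something like this (a sketch in the style of McCarthy's snippet above; only the foot line is from his file, the rest is illustrative):

(setq m 1.0)              ; metres as the base unit
(setq inch (* 0.0254 m))  ; legal definition: 25.4 mm exactly
(setq foot (* 12 inch))   ; foot derived from the inch
(setq mile (* 5280 foot)) ; mile derived from the foot

Switching to a pre-1959 British inch would then be a one-line change.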
One thing I always wonder is how people throughout history, in famous historic codebases etc., can indent things... 90% of the way, and then out of dozens of things correct there will be 3 things that are painfully off. Man, just indent them all the way instead of doing it halfway.
I've seen this in Dennis Ritchie's code, Doom's code, etc.
Am I the only person who sees these things stick out like a sore thumb? It would drive me mad. They either all have to be unaligned, or all aligned, but not 90%.
In travel time, no. In energy cost, yes, but only if you're willing to wait a long time to get there. Also impossible to communicate with directly by radio.
So, I know that elisp has historically lacked lexical scope, so setting variables without a prefix risks name clashes, since even a variable setq'd into existence inside a defun is added to the global namespace.
I did an experiment to double check it's still true in a recent-ish emacs:
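;; A sketch of such an experiment (function and variable names here
;; are illustrative, not from the original comment):
(defun my-leaky-fn ()
  (setq my-unprefixed-var 42))  ; no defvar, no let, no prefix

(my-leaky-fn)
my-unprefixed-var               ; => 42 -- now a global dynamic variable

;; This holds even with lexical-binding: t, since setq on a variable
;; with no lexical binding in scope sets the global dynamic value.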
lmfao, anywhere I can read more about McCarthy's plans for this?