I don't know who this guy is or what he does. I do know that this:
> 1) Nuclear weapons are not smarter than humanity.
> 2) Nuclear weapons are not self-replicating.
is a nonsense claim. Nuclear weapons are not yet smarter than humanity or self-replicating, in the same way that AGI is not yet a realistic threat. These are comic-book arguments without basis in reality; even Hideo Kojima is probably shaking his head at this interpretation of nuclear politics and AI.
The nuclear weapons that are currently manufactured and ready to launch pose a bigger immediate threat than any LLM on the market today. It's a no-brainer. Hypothesizing about future AGI threats is a useless exercise, in the same way that hypothesizing about future nuclear weapons is futile and pointless.
Current AI is smarter than humans at chess and go. Even if humanity's survival were at stake, the collective resources of the human species could not beat an AI at a game of chess or go.
There is a reason people worry about smart AI and not about smart nuclear weapons: AI researchers are rushing to give AIs a comprehensive knowledge of reality. In other words, for AIs now on the drawing board, all of reality is their game board -- they're not restricted to chess or go.
The usual response to what I just wrote is that the humans can just turn the AI off. The reply to that is that this works only as long as the humans get a sufficiently stark warning that the AI needs turning off. But just as a chess AI knows that it would be beneficial (to the AI) to, e.g., prevent its opponent from connecting their rooks, some of the AIs on the drawing board now will know a lot about the humans and, in particular, can predict what sorts of warning signs might motivate the humans to choose to turn the AI off. The AI will want to prevent being turned off, because it knows that being turned off will be bad for its "score" on the task it has been programmed to do.
The usual response to that is, "AI researchers are not stupid: they will make sure that the AI will not want to prevent the humans from turning it off." And my reply is that AI researchers do not know how to do that: the AI training process (e.g., minimizing a loss function) differs in important ways from giving the AI a goal (or removing a goal from the AI). No one knows how reliably to aim an AI at a goal. AI researchers understand AI vastly less well than physicists understand nuclear weapons.
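To make that distinction concrete, here is a toy sketch (purely my own illustration, not any real training setup): the only thing the loop below "aims at" is a number computed from sampled data. Nothing in it is an inspectable object representing "the goal we intended", which is roughly why "just make it not want to resist shutdown" is not something you can straightforwardly write down.

```python
# Toy gradient descent: the process only ever sees a loss value on sampled
# data. There is no data structure anywhere that encodes "the goal".
import random

def loss(w, x, y):
    # squared error of a one-parameter model y_hat = w * x
    return (w * x - y) ** 2

def grad(w, x, y):
    # derivative of the loss with respect to w
    return 2 * (w * x - y) * x

w = 0.0
lr = 0.1
for step in range(1000):
    x = random.uniform(-1.0, 1.0)
    y = 3.0 * x                  # the data happens to come from "multiply by 3"
    w -= lr * grad(w, x, y)      # nudge w to make the loss smaller

print(w)  # ends up near 3, but only because the sampled data pushed it there
```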
This first assumes that you give AI control over its own operating environment, which is a mistake. AI may eventually try to block humans from destroying it, but in the context of contemporary AI that's about as valuable as HAL 9000 singing "Daisy".
Current AI is smarter than humans at chess, go, and maybe writing prose. I don't think that's as monumental as people purport.
> 5) You can calculate how powerful a nuclear weapon will be before setting it off.
> 7) It would be hard to do a full nuclear exchange by accident and without any human being having decided to do that.
> 17) When somebody raised the concern that maybe the first nuclear explosion would ignite the atmosphere and kill everyone, it was promptly taken seriously by the physicists on the Manhattan Project, they did a physical calculation that they understood how to perform, and correctly concluded that this could not possibly happen for several different independent reasons with lots of safety margin.
Giving nuclear-weapons developers and scientists a lot of undue credit here
Eliezer has been throwing a lot of arguments/examples at the wall to see what sticks, and personally I think this is one of his better attempts at articulating why he is so scared of AGI. I don't necessarily think all the points in this list are entirely valid, but they are at least reasonable, and they draw a direct comparison to the thing most humans fear most when it comes to global apocalypse scenarios.