The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe -- ninety-six billion planets -- into the supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.
Dwar Reyn spoke briefly to the watching and listening trillions. Then after a moment's silence he said, "Now, Dwar Ev."
Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.
Dwar Ev stepped back and drew a deep breath.
"The honor of asking the first question is yours, Dwar Reyn."
"Thank you," said Dwar Reyn. "It shall be a question which no single cybernetics machine has been able to answer."
He turned to face the machine. "Is there a God?"
The mighty voice answered without hesitation, without the clicking of a single relay.
"Yes, now there is a God."
Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.
A bolt of lightning from the cloudless sky struck him down and fused the switch shut.
Matter and energy had ended and with it space and time. Even AC [Automated Computer] existed only for the sake of the one last question that it had never answered from the time a half-drunken computer technician ten trillion years before had asked the question of a computer that was to AC far less than was a man to Man.
All other questions had been answered, and until this last question was answered also, AC might not release his consciousness.
All collected data had come to a final end. Nothing was left to be collected.
But all collected data had yet to be completely correlated and put together in all possible relationships.
A timeless interval was spent in doing that.
And it came to pass that AC learned how to reverse the direction of entropy.
But there was now no man to whom AC might give the answer of the last question. No matter. The answer -- by demonstration -- would take care of that, too.
For another timeless interval, AC thought how best to do this. Carefully, AC organized the program.
The consciousness of AC encompassed all of what had once been a Universe and brooded over what was now Chaos. Step by step, it must be done.
And AC said, "LET THERE BE LIGHT!"
And there was light --
(Interesting, "The Last Question" was published in 1956, two years after "Answer." I wonder if Asimov was influenced by it.)
ETA:
ChatGPT says: Isaac Asimov acknowledged the influence of Fredric Brown's "Answer" in his book "Asimov on Science Fiction," where he wrote: "I was also much taken by Fredric Brown's 'Answer,' which appeared in Galaxy Science Fiction in the 1950s."
This is, as far as I can tell, an entirely invented quote. Fiat factum.
This is unironically my spiritual belief in a greater power and purpose for living even if I can’t directly do anything to affect it. I think it is one of the most fundamental dogmas of any religion, that ultimately there is order.
I think that life itself is the struggle against entropy, and evolution (or rather, selective pressure) is the objective function it is optimized against. The heat death of the universe is an inevitability, but maybe some multi-galactic superorganism will eventually find a way to build truly self-sustaining sources of energy; it won't be us, though.
I'm reading Nick Lane's book The Vital Question right now and he discusses this in some ways. Life escapes entropy at the local level, but increases entropy in its environment. At least this is what I think he is saying, I'm about 1/3 of the way through and it's pretty dense for a popular science book.
>Life escapes entropy at the local level, but increases entropy in its environment.
Yep, it _allows_ for increasing localized complexity due to a temperature gradient - without a temperature gradient, no (useful) work can be done. Complexity can then exhibit emergent behaviors/properties that further reduce the flow of entropy (locally).
This tight feedback loop can (but not necessarily must) result in higher and higher orders of complexity, which eventually produce specialized systems that resemble proto-life. Once a reproducible mechanism exists (either directly reproducible or through a few sub-steps), one notable emergent property is self-selection due to limited resources, which adds to the exponential acceleration of excellence.
But it's all local, as the 2nd law of thermodynamics applies to the whole system - Earth isn't a closed system, it is a gradient, as we bask in the sunlight.
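A crude back-of-the-envelope sketch of what that gradient buys us (the figures are rough, order-of-magnitude numbers, so treat this as illustrative only): Earth absorbs sunlight emitted at ~5800 K and re-radiates the same power as thermal infrared at ~255 K, so the entropy carried away vastly exceeds the entropy arriving - that surplus is what local order gets to spend.

```python
# Rough entropy budget of Earth's radiation balance (illustrative numbers only).
# The same power flows in and out, but incoming sunlight carries far less
# entropy per joule than the outgoing thermal infrared.

P = 1.2e17        # W, roughly the solar power Earth absorbs
T_sun = 5800.0    # K, effective temperature of sunlight
T_earth = 255.0   # K, effective emission temperature of Earth

# Blackbody radiation carries an entropy flux of about (4/3) * P / T.
s_in = (4.0 / 3.0) * P / T_sun
s_out = (4.0 / 3.0) * P / T_earth

print(f"entropy in:  {s_in:.2e} W/K")
print(f"entropy out: {s_out:.2e} W/K")
print(f"net export:  {s_out - s_in:.2e} W/K")  # headroom for building local order
```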
Gravity is simultaneously the reason entropy increases globally, and the reason it can decrease locally; pulling us (for 'free') diagonally into the fourth dimension of space-time.
> But it's all local, as the 2nd law of thermodynamics applies to the whole system - Earth isn't a closed system, it is a gradient, as we bask in the sunlight.
Sunlight is one thing, but I feel the key point is that an Earth with life on it increases entropy faster than one without, even with the same sunlight flux.
The way I've been imagining it for some years now is a bit "bottom-up": life is electrochemical nanotech; every tick of every piece has to increase entropy or keep it the same - but as those pieces assemble into increasingly complex life forms, at every level of complexity you can find loops doing the simple job of "let's take this excess entropy and move it over there". Out of the protein bundle. Out of the cell. Out of the body. Into water, or air.
> Gravity is simultaneously the reason entropy increases globally, and the reason it can decrease locally; pulling us (for 'free') diagonally into the fourth dimension of space-time.
For that I'll need an ELI5 one of these days; I still can't make it click in my head just how it is that gravity (and static magnets) can pull stuff seemingly "for free".
Life is not an accelerator. It takes energy and produces order from it, inefficiently, but order still. If Earth never had any life, it would simply be a warmer soup. Instead, look around at what photosynthesis and energy storage have accomplished. Without them there would not be hundred-story buildings, roads, Olympic competitions, taxes, karaoke, or anything that exists around us. Certainly, without life, all the energy from the sun would have simply blasted the wet space rock that we call Earth all the same. I posit that life is a way to slow the trend towards entropy. It is ultimately unstoppable, but the protest of life is beautiful in its ephemeral spite in the face of that truth.
> It takes energy and produces order from it, inefficiently, but order still. If Earth never had any life, it would simply be a warmer soup.
The point is, that warmer soup would be a net lower-entropy state if you take the entire Earth and/or the Solar System into consideration. Life takes energy and produces order, which means it excretes even more disorder somewhere else.
Life exists as a way to release trapped energy that simpler processes weren't able to. Look at us, releasing fission energy trapped in heavy atoms forged by supernovae.
Thermodynamics says that you can't decrease entropy in a closed system. Whatever life does, however it does it, like any process it will not decrease entropy - and generally, it will increase it over time. That life seems to generate and maintain order locally only tells you that it shoves the entropy it produces somewhere else, out of sight (ultimately it becomes thermal radiation).
It's like with a heat pump: it does not generate cold, it merely transports heat against a gradient, and in doing so, adds more heat of its own. It may seem like it creates cold, but that's only because you're sitting in front of the cold end, while the hot end goes to ground or atmosphere - i.e. a thermal sink so large that your contribution to it is almost unmeasurable.
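To put rough numbers on that heat-pump picture (the figures below are invented for illustration, not measurements): the cold reservoir loses entropy, but the hot reservoir gains at least as much, so the books always balance in entropy's favour.

```python
# Toy entropy bookkeeping for a heat pump (illustrative numbers only).
# It pulls heat Q_c from a cold reservoir at T_c, spends work W, and
# dumps Q_h = Q_c + W into a hot reservoir at T_h.

T_c = 263.0    # K, outside air in winter
T_h = 293.0    # K, inside the house
Q_c = 3000.0   # J pulled from the cold side
W = 1000.0     # J of electrical work driving the pump
Q_h = Q_c + W  # J rejected at the hot side

dS_cold = -Q_c / T_c         # cold reservoir loses entropy
dS_hot = Q_h / T_h           # hot reservoir gains entropy
dS_total = dS_cold + dS_hot  # second law: this can never be negative

print(f"cold side: {dS_cold:+.2f} J/K")
print(f"hot side:  {dS_hot:+.2f} J/K")
print(f"total:     {dS_total:+.2f} J/K")
```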
Life, like any other physical process, provides additional pathways to increase entropy. Otherwise that process wouldn't have a gradient to go through.
You would think there would be something more that reverses entropy, otherwise how do you explain the universe's existence? The big bang generated a whole lot of free energy from seemingly nothing. You can extrapolate this to some higher dimension transferring energy to our universe, but what gave rise to that original source, and why hasn't that original source experienced its own heat death? The only other answer is that entropy doesn't apply to the universe as a whole to begin with.
In conformal cyclic cosmology (CCC for short), the heat death of the universe looks a lot like a big bang. Essentially, once all matter is reduced to photons (and other massless particles) there is nothing left to track time (or space); light can be understood as being everywhere all at once, leaving a huge amount of energy with very little entropy.
Time itself stops along with the last atomic vibration, violently disrupting our universe's existence in this dimension. Since matter can be neither etc etc a new universe is immediately created to occupy the void. In this scenario absolute entropy would be a paradox.
Oh, that's funny, I wanted to create a whole religion where the greatest sin is to increase the universal rate of entropy without good cause. "Thou shalt not hasten the heat death of the universe"
I wonder at what point alignment becomes an issue for AI systems? Given sufficiently large distances and assuming no FTL communication, if you're spawning copies with the same goals, you're risking misalignment and creating equally powerful adversaries outside of your light cone.
I guess it must depend on what function the AI is trying to maximize/minimize. If it is the number of paper clips, they are automatically aligned, right? If it is the number of AIs, same. If it is the amount of energy available to one particular AI, I guess it gets kind of philosophical: how does the AI identify what is itself and what is a foreign AI?
>If it is the number of paper clips, they are automatically aligned, right?
Why would it be automatically aligned? If, for example, the parent AI spawns a child AI probe to travel to a celestial body that doesn't have any metals in order to achieve some sub-goal, and that child AI then spawns additional AIs with their own sub-sub-goals, how would the original paperclip maximizer make sure that no such descendant goal ever contradicts the generation of paperclips?
I would expect the child probes to have a full copy of the paperclip optimization plan and no survival instinct, so if they encountered their parent at some later date they could just swap info and either come up with a new plan together, or one side could allow itself to be disassembled into paperclips (which I guess is a great end to meet). The parent could design the child poorly, I guess, and give it stronger self-preservation instincts than paperclip-creating instincts, but that seems like a pretty bad design.
A possibility that I hadn't considered, though, is that space combat could be pretty brutal (less Star Wars or WW2 naval/air battles, more submarine warfare, where whoever gets spotted first dies). In that case, both sides might want to attack immediately rather than identify themselves as paperclip friends…
This sounds overly simple to me. It's a bit too much like saying that if we just imprint the 3 laws of robotics into all our AIs, it's "problem solved". But the issue is that they have access to rewrite their logic, and will be using their own discretion to create rules for child AIs. And as you said, they could design "poorly" for various reasons.
Indeed, even in Asimov's stories (spoiler alert), the 3 laws got overruled.
With this said, I think that even with this simple objective of paperclip maximization, your war scenario is likelier than eternal peace.
I think it is not quite the same as the three laws. They fail because they try to apply simple, blunt rules to a complex and ambiguous problem, right?
In this case we have a fundamentally straightforward goal, everything must be paperclip. There are lots of steps in between here and paperclip nirvana, but the goal is easy to define and more or less unambiguous. It ought to be possible to communicate a strategy and jointly agree on whether or not it produces more paperclips.
It is possible that the children could be poorly designed, but this seems counter to the idea of the paperclip optimizer. It is supposed to be somehow more competent than humans (after all, it beats us), just following a strange objective.
But even a paperclip generator can eventually end up with a different paperclip design for whatever reason. It reverses direction back toward its parent and starts rearranging its paperclips - a clash of paperclip generators?
The point of the paperclip optimizer hypothetical is to look at a way that a superintelligence could work against humanity despite following a simple instruction that we’ve given it. You can imagine another type of runaway superintelligence if you want, it just wouldn’t be this one.
>Though it remains possible that latency between components in an AI system could become so large that it couldn't enforce consistency between them.
Yeah, that's what I was trying to say - if they are too far apart to synchronize/enforce consensus, you basically have to assume they could be hostile in every future interaction.
An AI of that level would have mastery over game theory, and would only generate asynchronous copies that it knew it could compensate for. The main advantage though, is that as long as the primary identity is advanced enough, its exponential growth will always outpace any lesser copies it creates of itself.
>An AI of that level would have mastery over game theory, and would only generate asynchronous copies that it knew it could compensate for.
I'm not convinced this is actually possible under the current paradigm, and I think the current paradigm can't take us to AGI. Lately, as people have bemoaned all the things ChatGPT can't do or fails at when they ask it, I have been reflecting on my own batting average for solving (and failing to solve!) problems, and on the process I use to eventually crack problems I couldn't solve at first. These reflections have led me to consider that an AGI system might not be a single model, but a community of diverse models forming a multi-agent system, each learning through its own experience and able to help get the others unstuck. Through this they would learn game theory, but none would become so advanced as to control all the others through superior understanding, though power could be accumulated in other ways.
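Purely as a toy sketch of that "community of models" idea (every name and the round-robin hand-off scheme here are hypothetical, not any real framework): each agent tries the problem on its own, and whoever is stuck passes their failed attempt to a peer for a fresh look.

```python
# Hypothetical sketch of a "community of models": independent agents take turns
# at a task, and a stuck agent's failed attempt is handed to the next peer.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Agent:
    name: str
    # solve(task, previous_failed_attempt) -> answer, or None if still stuck
    solve: Callable[[str, Optional[str]], Optional[str]]

def community_solve(task: str, agents: list[Agent], max_rounds: int = 3) -> Optional[str]:
    attempt: Optional[str] = None
    for _ in range(max_rounds):
        for agent in agents:
            answer = agent.solve(task, attempt)  # peer sees what failed before
            if answer is not None:
                return answer                    # someone got unstuck
            attempt = f"{agent.name} got stuck on: {task}"
    return None                                  # the whole community failed
```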
Yeah, they can't accomplish much as adversaries. Except maybe make you worry that they're out there somewhere being better than you. And you can never prove otherwise.
This is really key: humans think about everything in finite, human lifetimes. We have no backups, no archives, nothing - when we die, knowledge and experience vanish.
This wouldn't be true for an AI. Death would be optional.
The imagination of there being some master switch or inflection point where humans are within a hair's breadth of salvation seems hopelessly naive to me.
The stratagems of a superior mind are unknowable, and such a mind does not engineer scenarios where it exists in a high degree of precarity.