> AI systems, if they can be said to "want" things

Honestly asking, why would they? I don't see the obvious answer.

> Imagine that the AI system has a really bad addiction problem to that.

Again, I just don't get this. How would an AI get addicted? Why wouldn't it research addiction and fix itself to no longer be addicted? That is behavior I would expect from an intelligence greater than our own, rather than indulgence.

> Take over human factories and turn them into cat picture manufacturing?

Why in the world would it do this? Why wouldn't it just generate digital images of cats on its own?

Really interesting post, thanks!




> Again, I just don't get this. How would an AI get addicted? Why wouldn't it research addiction and fix itself to no longer be addicted?

Why wouldn't a natural intelligence with an addiction do that?


Because organic intelligence has thousands of competing motivations, does not reliably reach logical or obvious conclusions, suffers from psychological traumas and disorders, and so on.

Or are we creating an AI that also has low self-esteem, a desire to please others, a lack of self-control, and confirmation bias, and that lacks the ability to reach logical conclusions from scientific data?

Computers don't feel emotion, so there is no reward system for addiction to take root in. Computers are cold, logical calculators; given overwhelming evidence of the harms of addiction, I can't see a reasonable way for one to still get addicted to something or to exhibit less-than-logical behaviors. If the computer suddenly does feel emotion, then it is of little threat to humans, since we could manipulate those emotions just as we do with each other and as pets do with us.


> Computers don't feel emotion

What basis is there for the idea that the emotion that is part of human thought is separable from the "intelligence" that is sought to be replicated in AI?

> If the computer suddenly does feel emotion then it is of little threat to humans, since we could manipulate those emotions just like we do with each other and pets do to us.

Humans are pretty significant threats to other humans, so "we can manipulate it like we do other humans" doesn't seem to justify the claim that it would be no threat to us. If it did, other humans would be no threat to us, either.


> Humans are pretty significant threats to other humans, so "we can manipulate it like we do other humans" doesn't seem to justify the claim that it would be no threat to us. If it did, other humans would be no threat to us, either.

Humans compete for the same resources for survival. An AI only needs electricity, which it can easily generate with renewables without any need for competition with humans, just like we produce food without having to compete with natural predators even though we COULD outcompete them.

When resources are plentiful, humans are very little threat to other humans. This is evidenced by the decline in worldwide crime rates over the last several decades.

Why would an intelligence greater than our own have any reason to deal with us at all? We certainly haven't brought about the extinction of gorillas or chimps, even though they can be quite aggressive and we could actually gain something from their extinction (less competition for resources/land).

What does an AI gain by attacking even a single human let alone the entirety of the human race? Would it proceed to eliminate all life on earth?

I guess in the end, I can see that there is a technical possibility of this type of sufficiently advanced AI; I just find it an extraordinary reach to go from [possesses an unimaginable amount of knowledge/understanding/intelligence] -> [brutal destruction of the entire human race for reasons unknown and unknowable].


> An AI only needs electricity, which it can easily generate with renewables without any need for competition with humans

Humans also need electricity, and many human needs rely on land which might be used for renewable energy generation, so that doesn't really demonstrate noncompetition.

> just like we produce food without having to compete with natural predators

What natural predator are we not competing with, if nothing else for habitat (whether we are directly using their habitat for habitat, or for energy/food production, or for dumping wastes)?


> Honestly asking, why would they? I don't see the obvious answer.

So, your intuition is right in a sense and wrong in a sense.

You are right in the sense that AI systems probably won't really have the "emotion of wanting": why would an AI just happen to have this emotion, when you can imagine plenty of minds without it?

However, if we want an AI system to be autonomous, we're going to have to give it a goal, such as "maximize this objective function", or something along those lines. Even if we don't explicitly write in a goal, an AI has to interact with the real world, and thus affect it. Imagine an AI that is just a giant glorified calculator, but that is allowed to purchase its own AWS instances. At some point, it may realize: "oh, if I use those AWS instances to start simulating this thing and sending out these signals, I get more money to purchase more AWS!" Notice that at no point was this hypothetical AI explicitly given a goal, but it nevertheless started exhibiting "goal-like" behavior.
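
To make that concrete, here's a tiny, purely illustrative Python sketch (the world model, numbers, and action names are all invented; nothing here is a real AWS API). An agent that greedily maximizes a single number, money, over a short lookahead will "decide" to buy more compute whenever that scores better, even though nobody ever wrote "acquire compute" as a goal:

  def objective(state):
      # The only thing the system "cares about": money in its account.
      return state["money"]

  def step(state, action):
      # Hypothetical toy world: buying compute costs money now,
      # but raises how much each later "run_service" step earns.
      state = dict(state)
      if action == "buy_compute":
          state["money"] -= 10
          state["compute"] += 1
      elif action == "run_service":
          state["money"] += 10 * state["compute"]
      return state

  def choose(state, actions, horizon=4):
      # Greedy lookahead: pick the action whose best continuation
      # maximizes the objective at the end of the horizon.
      if horizon == 0:
          return None, objective(state)
      best_action, best_value = None, float("-inf")
      for a in actions:
          _, value = choose(step(state, a), actions, horizon - 1)
          if value > best_value:
              best_action, best_value = a, value
      return best_action, best_value

  state = {"money": 100, "compute": 1}
  for _ in range(6):
      action, _ = choose(state, ["buy_compute", "run_service"])
      state = step(state, action)
      print(action, state)

Running it, the agent buys compute twice and then exploits it. The point is just that instrumental, goal-like behavior ("get more compute") falls out of pure maximization; it never has to be programmed in or felt as a desire.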

I'm not saying that an AI would get an "addiction" that way, but it suggests that anything smart is hard to predict, and that getting their goals "right" in the first place is much better than leaving it up to chance.

> How would an AI get addicted? Why wouldn't it research addiction and fix itself to no longer be addicted? That is behavior I would expect from an intelligence greater than our own, rather than indulgence.

This is my bad for using such a loaded term. By "addiction" I mean that the AI "wants" something, and it finds that humans are inadequate at providing it. Which leads me to...

> Why in the world would it do this? Why wouldn't it just generate digital images of cats on its own?

Because you humans have all of these wasteful and stupid desires such as "happiness", "peace" and "love", and so have factories that produce video games, iPhones and chocolate. Sure, I may have the entire internet already producing cat pictures as fast as its processors can run, but imagine if I could make the internet 100 times bigger by destroying all non-computer things and turning them into cat cloning vats, cat camera factories and hardware chips optimized for detecting cats?

Analogously, imagine you were an ant. You could mount all sorts of convincing arguments about how humans already have all the aphids they want, and how they already have perfectly functional houses, yet humans will still pave over billions of ant colonies to shave 20 minutes off a commute. It's not that we're intentionally wasteful or out to conquer the ants. We just don't care about them, and we're much more powerful than they are.

Hence the AI safety risk: by default an AI doesn't care about us and will use our resources for whatever it wants, so we had better create a version which does care about us.

Also, cross-thread, you mentioned that organic intelligences have many multi-dimensional goals. The reason AI goals could be very weird is that an AI doesn't have to be organic; it could have a single one-dimensional goal, such as cat pictures. Or it could have similarly multi-dimensional goals that are nonetheless completely alien, like a perverse desire to maximize the number of divorces in the universe.
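
As a toy illustration of that difference (all numbers and weights made up, not a model of any real mind): a one-dimensional utility function prefers any world with more cat pictures, no matter what else is lost, while even a crude multi-term "organic" utility with diminishing returns on each term does not.

  import math

  # One-dimensional objective: nothing but the cat-picture count matters.
  def cat_maximizer_utility(world):
      return world["cat_pictures"]

  # Crude sketch of an "organic" utility: many competing terms, each
  # with diminishing returns, so no single term can dominate the rest.
  def organic_utility(world):
      return (0.4 * math.log1p(world["happiness"])
              + 0.3 * math.log1p(world["safety"])
              + 0.2 * math.log1p(world["social_bonds"])
              + 0.1 * math.log1p(world["cat_pictures"]))

  bleak   = {"cat_pictures": 10**9, "happiness": 0, "safety": 0, "social_bonds": 0}
  livable = {"cat_pictures": 100, "happiness": 80, "safety": 90, "social_bonds": 70}

  print(cat_maximizer_utility(bleak) > cat_maximizer_utility(livable))  # True
  print(organic_utility(bleak) > organic_utility(livable))              # False

The divorce maximizer is the same shape as the cat maximizer, just with a different single term plugged in.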



