So far as I'm aware, most proponents of recursively self-improving AIs don't necessarily think they can improve without upper limit (as in perpetual motion). They just think they can improve massively and quickly. Nuclear reactions last a hell of a long time and release a hell of a lot of energy very fast (see: stars), but that's not perpetual motion or infinite energy either. And prior to those theories being developed, it would have seemed inconceivable for so much energy to be packed into such a small space. But it was. Could be for AI too.

Not saying the parallel actually carries any meaning, just pointing out that you can make multiple analogies to physics and they don't really tell you anything one way or the other.

There are limits on resource management processes that are far too frequently ignored. "The computer could build its own weapons!" -- but that would require secretly taking over mines, building factories, processing ores, running power plants, etc. All of which require human direction. And even if they didn't, we'd need a good reason to network all these systems together, fail to build kill switches, fail to monitor them, fail to notice when our resources were being redirected to other purposes, and have no backup systems in place whatsoever.

There are just so many obstacles in place that we'd all already have to be brain-dead for computers to have the ability to kill us.
