
Ecosystems are hellish Malthusian processes rife with extinction. I'm unsure why they think an ecosystem of superhuman agents will be any different. What sort of selective pressures would exist in such a world that would ensure the survival of humanity? It's pretty naive to think a competitive ecology would select for anything more than very intelligent entities that value replication.



There's a very real case that any superhuman agent would prioritize its own survival over the survival of any human, since (presumably) its utility function would encourage it to continually guarantee the survival of "humanity" or something larger than any single human... and it can't guarantee anything if it's dead, right?

So the moment there's a war between two superhuman agents, either of them could end up de-prioritizing human life, more so than they might if either of them existed in isolation.

And what if there's actual resource scarcity once there are a large number of superhuman agents? Am I missing something obvious here?

If I'm not missing anything... it's painfully ironic to me that we worry about the AI Box, and yet by open-sourcing the work of the best minds in AI, we voluntarily greatly increase the probability that AI will be born outside of a box - somebody is going to be an idiot and replicate the work outside of a sandbox.

Now, despite all of this, I'm optimistic about AI and its potential. But I believe the best chance we have is when the best, most well-intentioned researchers among us have as much control as possible over designing AI. Ruby on Rails can be used to make fraudulent sites, but it's fine to open-source it since fraudulent sites don't pose an existential risk to sentient biological life. That is not necessarily the case here.


Physics. You're right that ecosystems are brutal. That's exactly why I'm not worried about AI as an existential threat to humanity.

A few years back Bill Joy was sounding the alarm on nanotechnology. He sounded a lot like Elon Musk does today. Nanobots could be a run-away technology that would reduce the world to "grey goo". But nothing like that will ever happen. The world is already awash in nanobots. We call them bacteria. Given the right conditions, they grow at an exponential rate. But they don't consume the entire earth in a couple of days, because "the right conditions" can't be sustained. They run out of energy. They drown in their own waste.
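To make the resource-limit point concrete, here's a minimal logistic-growth sketch in Python; the growth rate r and carrying capacity K are illustrative assumptions, not measured values. Growth looks exponential at first, then flattens as the population approaches what the environment can sustain:

    # Minimal logistic-growth sketch: exponential growth capped by a
    # carrying capacity K. All parameters here are illustrative.
    def logistic(pop, r=1.0, K=1e9, dt=0.1, steps=400):
        history = []
        for _ in range(steps):
            history.append(pop)
            pop += r * pop * (1 - pop / K) * dt  # growth slows as pop nears K
        return history

    trajectory = logistic(pop=1.0)
    print(trajectory[0], trajectory[-1])  # starts at 1, saturates near K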

AI will be the same. Yes, machines are better than us at some things, and that list is growing all the time. But biology is ferociously good at converting sunlight into pockets of low entropy. AI such as it exists today is terrible at dealing with the physical world, and only through a tremendous amount of effort are we able to keep it running. If the machines turn on us, we can just stop repairing them.


The danger of nanotechnology is that it can be built better than biological life. It could outcompete biological life in its own environment, or at least in different environments.

Solar panels can collect more energy than photosynthesis. Planes can fly faster than any bird. Guns are far more effective than any animal's weapons. Steam engines can run more efficiently than biological digestion. And we can get power from fuel sources biology doesn't touch, like fossil fuels or nuclear.
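As a rough sanity check on the solar-panel claim, a back-of-envelope comparison puts panels well ahead per square meter of full sun; the efficiency figures below are ballpark assumptions (roughly 20% sunlight-to-electricity for a commercial panel versus on the order of 1% sunlight-to-biomass for typical crops):

    # Back-of-envelope comparison; efficiency figures are rough ballparks.
    insolation_w_per_m2 = 1000           # full sun, approximate
    pv_efficiency = 0.20                 # typical commercial panel, approximate
    photosynthesis_efficiency = 0.01     # typical crop, approximate

    pv_watts = insolation_w_per_m2 * pv_efficiency
    bio_watts = insolation_w_per_m2 * photosynthesis_efficiency
    print(pv_watts, bio_watts)           # ~200 W vs ~10 W per square meter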

We conquered the macro world before we even invented electricity. Now we are just starting to conquer the micro world.

But AI is far more dangerous. Nanotechnology would take many, many decades - perhaps centuries - of work to advance to that level. It's probably possible to build grey goo; it's just not easy or near. AI, however, could be much closer given the rapid rate of progress.

If you make an unfriendly AI, you can't just shut it off. It could spread its source code through the internet. And it won't tell you that it's dangerous. It will pretend to be benevolent until it no longer needs you.


> Solar panels can collect more energy than photosynthesis. Planes can fly faster than any bird. Guns are far more effective than any animal's weapons. Steam engines can run more efficiently than biological digestion. And we can get power from fuel sources biology doesn't touch, like fossil fuels or nuclear.

A gun isn't effective unless a human loads it, aims it, and pulls the trigger. All your other examples are the same. We do not have any machine that can build a copy of itself, even with infinite energy and raw materials just lying around nearby. Now consider what an "intelligent" machine looks like today: a datacenter with 100,000 servers, consuming a GW of power and constantly being repaired by humans. AI is nowhere near not needing us.


Because we haven't had the reason or ability to make self-replicating machines yet. It's possible, though. With AI and some robotics, you can replace all humans with machines. The economy doesn't need us.


Advanced nanotechnology is not the only possible way to achieve power: http://slatestarcodex.com/2015/04/07/no-physical-substrate-n...


Yeah, interesting. I'll just point out that my argument is not that AI can't affect the physical world. Clearly it can. It's that AI is still embodied in the physical world, and still subject to the laws of physics. We're so much more efficient and effective in the physical world that we are not threatened by AI, even if it gets much more intelligent than it is today.


great read. thanks for that.


"If the machines turn on us, we can just stop repairing them."

Never understood this reasoning.

We are not talking about machines vs. biologic life, this is a false dichotomy. We are talking about intelligence.

Intelligence is the ability to control the environment through the understanding of it. Any solvable problem can be solved with enough intelligence.

Repairing a machine is just a problem. The only limitations on intelligence are the laws of physics.


That's my point. The laws of physics are a bitch. We like to think of the internet as a place of pure platonic ideals, where code and data are all that matter. But that ethereal realm is still grounded in matter and ruled by physics. And without bags of mostly-water working 24/7 to keep it going, the internet just falls apart.


> But they don't consume the entire earth in a couple of days, because "the right conditions" can't be sustained. They run out of energy. They drown in their own waste.

Maybe, but not necessarily, and even if they do "drown in their own waste" they might take a lot of others with them. When cyanobacteria appeared, the oxygen they produced killed off most other species on the planet at the time [1]. The cyanobacteria themselves are still around and doing fine.

[1] https://en.wikipedia.org/wiki/Great_Oxygenation_Event



