
> To assume the creation of AI with "a positive feedback loop that goes nobody knows how far" without humans first understanding how seems more of a belief in magic to me.

Not really. This is pretty much the definition of a positive feedback loop.

To give a very simplified example, imagine that a mind of IQ N is able to create, at best, a mind of IQ N+10. Say the smartest human alive has an IQ of 150. He goes and creates an AI with an IQ of 160, which then goes on to create a 170-IQ AI, and so on, ad infinitum.
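A quick sketch of that recurrence in Python (the fixed +10 gain and the 150 starting point are just the illustrative assumptions above):

    # Toy model of the feedback loop: each mind designs one 10 IQ points smarter.
    iq = 150  # the smartest human, per the example
    for generation in range(1, 6):
        iq += 10  # assumed fixed gain per generation
        print(f"generation {generation}: IQ {iq}")
    # 160, 170, 180, ... with no bound: the sequence diverges.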

Of course you could argue the relationship is different. Maybe the i-th mind can create, at best, a mind that is only (1/2)^i IQ points smarter than itself, at which point the whole series hits an asymptote, a natural limit caused by diminishing returns. But it would be one hell of a coincidence if humans were close to that natural limit.
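For contrast, a sketch of that diminishing-returns variant: the gains form a geometric series, so the total improvement stays bounded no matter how many generations you run (again assuming the illustrative starting IQ of 150):

    # Diminishing-returns variant: the i-th step adds (1/2)^i IQ points.
    iq = 150.0
    for i in range(1, 30):
        iq += 0.5 ** i  # gains shrink geometrically
    print(iq)  # ~151.0: the series converges to 150 + 1, an asymptote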

So basically, what we need to do to potentially start an intelligence explosion is to figure out how to make a general AI that is just a little bit smarter than us. Which seems entirely possible, given that we can use as much hardware as we like, making it both larger and faster than human brains.




I understand the concept of creating something exceedingly more generally intelligent than its creator, I'm simply suggesting it's not possible. Many people assume that it is, and we'll have to agree to disagree. But even if I'm wrong and it does become possible, think about how unlikely it would be for a human to accidentally accomplish this.

Also, if AI is to be smarter than humans, it will know it could potentially be wrong about anything. Armed with that knowledge, how much smarter can it really be?


> Also, if AI is to be smarter than humans, it will know it could potentially be wrong about anything. Armed with that knowledge, how much smarter can it really be?

That's not a big leap. In fact, we humans know this already; we've even quantified it nicely and called it probability theory.
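As a concrete illustration of how "could be wrong about anything" gets quantified, here is a minimal Bayes-rule update in Python (all the numbers are made up for illustration):

    # Bayes' rule: revise a belief given evidence instead of claiming certainty.
    prior = 0.5            # initial belief that a hypothesis is true
    p_e_given_h = 0.9      # chance of the evidence if the hypothesis is true
    p_e_given_not_h = 0.2  # chance of the evidence if it is false
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    posterior = prior * p_e_given_h / p_e
    print(posterior)  # ~0.818: more confident, yet still short of certainty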



