
People are bad at imagining something a lot smarter than themselves. They think of some smart person they know, not of themselves compared to a chimp or even a bacterium.

An unaligned superintelligent AGI pursuing some goal that happens to satisfy its reward, even an otherwise dumb or pointless goal (paperclips), will still play to win. You can’t predict exactly what move AlphaGo will make in a Go game (if you could, you’d be able to beat it), but you can still predict it will win.

It’s amusing to me when people claim they will control the superintelligent thing. How often in nature is something more intelligent controlled by something orders of magnitude less intelligent?

The comments here are typical and show that most people haven’t read the existing arguments in any depth or thought about them rigorously at all.

All of this looks pretty bad for us, but at least OpenAI and most others at the front of this do understand the arguments and don’t offer the same dumb dismissals (LeCun excepted).

Unfortunately, unless we’re lucky or alignment ends up being easier than it looks, the default outcome is failure, and it’s hard to see how the failure isn’t total.




>All of this looks pretty bad for us, but at least OpenAI and most others at the front of this do understand the arguments and don’t offer the same dumb dismissals (LeCun excepted).

The OpenAI people have even worse reasoning than the ones being dismissive. They believe (or at least say they believe) in the omnipotence of a superintelligence, but then claim that if you just give them enough money to throw at MIRI, they can solve the alignment problem and create a benevolent supergod. All the while they keep cranking up the GPU clusters and pushing out the latest and greatest LLMs anyway. If I took the risk seriously, I’d be pretty mad at OpenAI.



