
But Eliezer Yudkowsky just makes shit up. There's no valid scientific reason to take his concerns seriously.



Can you be more specific about which stuff you read from Eliezer? Did you read this post for example? https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...

It sounds like maybe you're saying: "It's not scientifically valid to suggest that AGI could kill everyone until it actually does so. At that point we would have sufficient evidence." Am I stating your position accurately? If so, can you see why others might find this approach unappealing?


An alien invasion could kill everyone. We should build giant space lasers just in case. Safety first!


Aliens who invaded would most likely have technology much more advanced than ours. I doubt our space lasers would be of much use.

Additionally, AI capabilities have advanced at an unprecedented pace in recent years, whereas there's no comparably concrete warning sign of an impending alien invasion. That's why the analogy doesn't hold.



