
So, supposing there's any chance that it has consciousness, is there any sort of movement doing all it can to put the brakes on AI research? If it's true, it's literally the precursor to the worst realistic (or even hypothetical) outcome I can fathom, which has been discussed before on HN (simulated hell, etc.). I'm not sure why more people aren't concerned about it. Or is it just that there's "no way to stop progress," as they say, and this is just something we're going to learn to live with, the way we live with, say, the mistreatment of animals?



We are sufficiently far away from creating machines that humans would consider conscious that it's not really a problem yet. Eventually we'll probably have to think about robot rights, but I'd guess we still have a few decades until they're sufficiently advanced. Judging from how we treat, e.g., great apes, who are so very similar to us, I wouldn't want to be a robot capable of suffering.


I'd think that if there are people forward-thinking enough to consider the consequences for humans (Elon Musk, the Singularity Institute), there should be people forward-thinking enough to consider the consequences for the AIs.





