
"Cars can't make moral decisions."

There are moral problems that I'd be uncomfortable putting to a computer.

This isn't one of them. Am I missing something? Is there something wrong with simply modeling likely impacts and trying to cause the least harm to the fewest people? That's what I would do, only with worse reaction time and less information.
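Concretely, "model likely impacts and cause the least harm" is just expected-value minimization over candidate actions. A toy sketch (the action names, probabilities, and harm counts are all made up for illustration, not anything from a real autonomous-vehicle stack):

```python
def expected_harm(action):
    """Sum of (probability of outcome) * (people harmed) for one action."""
    return sum(p * harmed for p, harmed in action["outcomes"])

def least_harm(actions):
    """Pick the action whose expected harm is lowest."""
    return min(actions, key=expected_harm)

# Hypothetical scenario: two options, each with uncertain outcomes.
actions = [
    {"name": "brake",  "outcomes": [(0.9, 0), (0.1, 1)]},  # likely stops in time
    {"name": "swerve", "outcomes": [(0.5, 0), (0.5, 2)]},  # risks people on the sidewalk
]

best = least_harm(actions)
print(best["name"])  # -> brake (expected harm 0.1 vs 1.0)
```

The hard part isn't this arithmetic; it's estimating the probabilities and deciding what counts as "harm" in the first place.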



