
>because machine learning is not even within sight of true AI

Well, I don't agree at all, and neither do many experts. It may not be very intelligent currently, but it's certainly getting there.

>How can we research the safety of something we know nothing about?

Even if AI uses totally unknown algorithms, that doesn't mean we can't do anything about it. The question of how to control AI is relatively agnostic to how the AI actually works. We don't need to know the details of a machine learning algorithm to talk about reinforcement learning. We don't need to know the exact AI algorithm to talk about how certain utility functions would lead to certain behaviors.
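
To make that concrete, here's a toy sketch in Python (the action names and utility values are invented for illustration, and the "agent" is just a greedy argmax, not any real system's algorithm). The point is that the behavior falls out of the utility function alone:

    # Toy sketch: behavior follows from the utility function alone,
    # regardless of how the maximizer is implemented internally.
    # The actions and utility values below are purely hypothetical.
    actions = {
        "make_paperclip": 1.0,        # directly rewarded by the stated utility
        "shut_down_when_asked": 0.0,  # neutral under the stated utility
        "disable_off_switch": 5.0,    # instrumentally valuable: protects future reward
    }

    def greedy_maximizer(available):
        # any maximizer, whatever its internals, prefers the highest-utility action
        return max(available, key=available.get)

    print(greedy_maximizer(actions))  # -> 'disable_off_switch'

Notice that the conclusion (it prefers disabling its off switch) required no knowledge of the underlying learning algorithm.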




> I don't agree at all, and neither do many experts. It may not be very intelligent currently, but it's certainly getting there.

Which experts say that? Besides, it's not a question of time. Even if strong AI is achieved next year, we are still at a point where we know absolutely nothing about it, or at least nothing relevant to an informed conversation about the nature of its threat or how best to avoid it. So we're still not within sight today even if we invent it next year (I am not saying this holds generally, only that as of January 2016 we have no tools for an informed discussion of AI's threats that would have any value in preventing them).

> We don't need to know the details of a machine learning algorithm to talk about reinforcement learning. We don't need to know the exact AI algorithm to talk about how certain utility functions would lead to certain behaviors.

Oh, absolutely! I completely agree that we must be talking about the dangers of reinforcement learning and utility functions. We are already seeing negative examples of self-reinforcing bias against women and minorities (and positive bias towards hegemonic groups), which is indeed a terrible danger. Yet this is already happening, and people still don't talk much about it. I don't see why we should instead talk about a time-travelling strong AI convincing a person to let it out of its box and then using brain waves to obliterate humanity.
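
And that feedback loop needs nothing resembling strong AI. Here's a toy sketch (the groups, numbers, and "retraining" rule are all invented for illustration) of how a small initial skew compounds when a model is effectively retrained on its own past decisions:

    # Toy sketch of self-reinforcing bias: a screening model is retrained on
    # its own past decisions, so an initial skew in the training data compounds.
    # The groups, shares, and the sharpening exponent are all hypothetical.
    hired_share = {"group_a": 0.7, "group_b": 0.3}  # historical skew in the data

    for generation in range(5):
        # the model's hires become the next generation's training data,
        # slightly sharpened because borderline candidates from the
        # under-represented group fall below the decision threshold more often
        sharpened = {g: s ** 1.5 for g, s in hired_share.items()}
        total = sum(sharpened.values())
        hired_share = {g: s / total for g, s in sharpened.items()}
        print(generation, {g: round(s, 2) for g, s in hired_share.items()})

    # the gap widens every round: 0.78/0.22, then 0.87/0.13, then 0.95/0.05, ...

Nothing in there is smarter than a thresholded classifier, which is exactly why it's worth discussing today.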

We don't, however, know what role, if any, reinforcement learning and utility functions play in strong AI. I worked with neural networks almost twenty years ago, and they haven't changed much since; they still don't work at all like our brain, and we still know next to nothing about how a worm's brain works, let alone ours.



