That is not what we're doing. We have no idea what AI is, we have no idea about the relationship our current research has to real AI (because machine learning is not even within sight of true AI), and so we're not even sure that anything we're doing can be classified as "AI research" that we can slow down. How can we research the safety of something we know nothing about?

Currently, much of the discussion on the subject happens in various fringe forums, where people imagine AI to be a god and then discuss the safety of creating a god. You can even find reasoning that goes like this: "we don't know how capable AI can be, but it could be a god with non-zero probability, and the danger has negative-infinity utility, so you have a negative-infinity expected value, which means it must be stopped now". Now, this sounds like a joke to us (we know that any argument invoking "non-zero probability" and infinite utility can conclude just about anything), but the truth is that such foolishness is not far from the best we can do, given how little we know of the subject.
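To make the shape of that argument concrete, here's a toy calculation (all numbers invented) showing why it proves too much: once you allow an unbounded negative utility, any non-zero probability, however tiny, swamps everything else.

    # Toy sketch of the "non-zero probability times infinite disutility" argument.
    # The numbers are made up; the point is that the conclusion doesn't depend on them.
    p_doom = 1e-12                # arbitrarily tiny, but non-zero
    u_doom = float("-inf")        # "infinite" disutility assigned to the catastrophe
    u_normal = 100.0              # whatever value we put on business as usual

    expected = p_doom * u_doom + (1 - p_doom) * u_normal
    print(expected)               # -inf, no matter how small p_doom is

The same template "justifies" stopping essentially any activity, which is why arguments of this form tell us very little.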




>because machine learning is not even within sight of true AI

Well, I don't agree at all, and neither do many experts. It may not be very intelligent currently, but it's certainly getting there.

>How can we research the safety of something we know nothing about?

Even if AI uses totally unknown algorithms, that doesn't mean we can't do anything about it. The question of how to control AI is relatively agnostic to how the AI actually works. We don't need to know the details of a machine learning algorithm to talk about reinforcement learning. We don't need to know the exact AI algorithm to talk about how certain utility functions would lead to certain behaviors.
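As a minimal, hypothetical illustration of that last point (the action set, outcomes, and utility function below are all made up): any optimizer that simply picks the highest-utility action will exhibit the same behavior, regardless of how it works internally.

    # Hypothetical toy example: the behavior follows from the utility function,
    # not from the particular learning algorithm behind it.
    actions = {
        "clean around the vase":  {"mess_removed": 5, "vase_broken": 0},
        "sweep vase into trash":  {"mess_removed": 6, "vase_broken": 1},
    }

    def naive_utility(outcome):
        # Rewards mess removal only; side effects are invisible to it.
        return outcome["mess_removed"]

    best = max(actions, key=lambda a: naive_utility(actions[a]))
    print(best)   # "sweep vase into trash" -- the broken vase costs nothing under this utility

Nothing in that analysis depends on whether the agent is a neural network, a planner, or something not invented yet; that's the sense in which this kind of safety discussion can be agnostic about the algorithm.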


> I don't agree at all, and neither do many experts. It may not be very intelligent currently, but it's certainly getting there.

Which experts say that? Besides, it's not a question of time. Even if strong AI is achieved next year, we are still at a point where we know absolutely nothing about it, or at least nothing relevant to an informed conversation about the nature of its threat or how best to avoid it. So we're still not within sight today even if we invent it next year (I am not saying this holds generally, only that as of January 2016 we have no tools for an informed discussion about AI's threats that would have any value in preventing them).

> We don't need to know the details of a machine learning algorithm to talk about reinforcement learning. We don't need to know the exact AI algorithm to talk about how certain utility functions would lead to certain behaviors.

Oh, absolutely! I completely agree that we must be talking about the dangers of reinforcement learning and utility functions. We are already seeing negative examples of self-reinforcing bias when it comes to women and minorities (and positive bias towards hegemonic groups), which is indeed a terrible danger. Yet this is already happening and people still don't talk much about it. I don't see why we should instead talk about a time-travelling strong AI convincing a person to let it out of its box and then using brain waves to obliterate humanity.
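For what it's worth, here's a crude, entirely invented toy of that self-reinforcing loop: a screening model retrained on its own past approvals over-weights whichever group it already approved more often, so a small initial gap widens instead of washing out.

    # Invented toy dynamic: each group's new share of approvals is its old share
    # squared and renormalised -- a crude stand-in for a model that keys on the
    # group it saw approved more often in its own training data.
    share = {"group_a": 0.55, "group_b": 0.45}

    for generation in range(5):
        raw = {g: s ** 2 for g, s in share.items()}
        total = sum(raw.values())
        share = {g: v / total for g, v in raw.items()}
        print(generation, {g: round(s, 3) for g, s in share.items()})
    # The 55/45 split drifts to roughly 99/1 within five retraining rounds.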

We don't, however, know what role -- if any -- reinforcement learning and utility functions play in a strong AI. I worked with neural networks almost twenty years ago, and they haven't changed much since; they still don't work at all like our brain, and we still know next to nothing about how a worm's brain works, let alone ours.



