
> Airplanes weren't a practical means of travel in 1910, but by 1960 it was a different story, and some people in 1910 had already realized that plane travel was coming.

Indeed, arguing about the potential threats of air travel in 1910 (let alone in 1810) would have been silly. The point isn't whether or not AI is possible (or could pose a serious threat), but whether discussing it as a threat is productive given our current, near-zero understanding of it. Jaron Lanier argues that not only is it not productive, it distracts from more pressing challenges related to machine learning.

> I am an atheist, and think most religious beliefs are irrational. However, that doesn't mean that every belief that "looks like" religion (in some vague, poorly-defined way) is irrational.

You think now that religion is irrational, but when most religions were established there was little reason to believe they were. As to the "vague, poorly-defined way", all I can say is that religion has many definitions[1].

The famous anthropologist Clifford Geertz defined it as a "system of symbols which acts to establish powerful, pervasive, and long-lasting moods and motivations in men by formulating conceptions of a general order of existence and clothing these conceptions with such an aura of factuality that the moods and motivations seem uniquely realistic."

Another famous anthropologist said (again, quoting from Wikipedia) that narrowing the definition to mean the belief in a supreme deity, or judgment after death, or idolatry, and so on, would exclude many peoples from the category of religious, and thus "has the fault of identifying religion rather with particular developments than with the deeper motive which underlies them".

It is therefore common practice among social researchers to define religion more by its motivation than by its specific content. If you believe in a super-human being and an afterlife, and not for scientific reasons (and currently AI is not science, let alone dangerous AI), that belief may well be a good candidate for a religious or quasi-religious one.

[1]: https://en.wikipedia.org/wiki/Religion




>The point isn't whether or not AI is possible (or could pose a serious threat), but whether discussing it as a threat is productive given our current, near-zero understanding of it.

It definitely is productive. We can either slow research on AI, or we can research AI safety now. Or both. There's no reason we have to just accept our fate and do nothing, or just hope everything works out when the time comes.


That is not what we're doing. We have no idea what AI is, we have no idea about the relationship our current research has to real AI (because machine learning is not even within sight of true AI), and so we're not even sure that anything we're doing can be classified as "AI research" that we can slow down. How can we research the safety of something we know nothing about?

Currently, much of the discussion on the subject is done in various fringe forums, where they imagine AI to be a god and then discuss the safety of creating a god. You can even find reasoning that goes like this: "we don't know how capable AI can be, but it could be a god with a non-zero probability, and the danger has a negative-infinity utility, so you have a negative-infinity expected value, which means it must be stopped now". Now, this sounds like a joke to us (we know that every argument with the words "non-zero probability" and infinite utility can conclude just about anything), but the truth is that such foolishness is not far from the best we can do given how little we know of the subject.
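To make the parodied arithmetic explicit (a back-of-the-envelope illustration of my own, not a quote from any such forum): if p > 0 is the probability assigned to the godlike-AI catastrophe and its utility is set to negative infinity, then the expected utility is

    E[U] = p * (-∞) + (1 - p) * u = -∞    for any p > 0 and any finite u,

so the conclusion is the same no matter how tiny p is or what finite stakes u represents -- which is exactly why infinite utilities let an argument conclude just about anything.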


>because machine learning is not even within sight of true AI

Well, I don't agree at all, and neither do many experts. It may not be very intelligent currently, but it's certainly getting there.

>How can we research the safety of something we know nothing about?

Even if AI uses totally unknown algorithms, that doesn't mean we can't do anything about it. The question of how to control AI is relatively agnostic to how the AI actually works. We don't need to know the details of a machine learning algorithm to talk about reinforcement learning. We don't need to know the exact AI algorithm to talk about how certain utility functions would lead to certain behaviors.
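To illustrate (a toy sketch of my own; the cleaning-robot scenario and every name in it are invented, and it stands in for no particular algorithm): the choice of utility function alone determines behavior, even when the "agent" is nothing more than a brute-force argmax over actions.

    # Toy sketch: behavior follows from the utility function, not from the
    # internals of the agent (here the agent is just argmax over actions).
    def best_action(actions, outcomes, utility):
        return max(actions, key=lambda a: utility(outcomes[a]))

    # Hypothetical outcomes for a cleaning robot's three possible actions.
    outcomes = {
        "clean_room": {"dust_removed": 5, "vase_broken": 0},
        "clean_fast": {"dust_removed": 9, "vase_broken": 1},
        "do_nothing": {"dust_removed": 0, "vase_broken": 0},
    }
    actions = list(outcomes)

    naive = lambda o: o["dust_removed"]                             # rewards dust removal only
    careful = lambda o: o["dust_removed"] - 100 * o["vase_broken"]  # penalizes side effects

    print(best_action(actions, outcomes, naive))    # -> clean_fast (breaks the vase)
    print(best_action(actions, outcomes, careful))  # -> clean_room

Swapping one utility for the other flips the behavior without touching anything else, which is the sense in which such discussions can be agnostic to the underlying algorithm.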


> I don't agree at all, and neither do many experts. It may not be very intelligent currently, but it's certainly getting there.

Which experts say that? Besides, it's not a question of time. Even if strong AI is achieved next year, we are still at a point where we know absolutely nothing about it, or at least nothing that is relevant for an informed conversation about the nature of its threat or how best to avoid it. So we're still not within sight today even if we invent it next month (I am not saying this holds generally, only that as of January 2016 we have no tools for an informed discussion about AI's threats that will have any value in preventing them).

> We don't need to know the details of a machine learning algorithm to talk about reinforcement learning. We don't need to know the exact AI algorithm to talk about how certain utility functions would lead to certain behaviors.

Oh, absolutely! I completely agree that we must be talking about the dangers of reinforcement learning and utility functions. We are already seeing negative examples of self-reinforcing bias when it comes to women and minorities (and positive bias towards hegemonic groups), which is indeed a terrible danger. Yet this is already happening and people still don't talk much about it. I don't see why we should instead talk about a time-travelling strong AI convincing a person to let it out of its box and then using brain waves to obliterate humanity.

We don't, however, know what role -- if any -- reinforcement learning and utility functions play in strong AI. I worked with neural networks almost twenty years ago, and they haven't changed much since; they still don't work at all like our brain, and we still know next to nothing about how a worm's brain works, let alone ours.


> There's no reason we have to just accept our fate and do nothing, or just hope everything works out when the time comes.

And there's the religion.


And there's the useless comment. Please explain how that sentence has anything to do with religion. Out of context, it doesn't even look like it's about AI. The same sentence could appear in a discussion about climate change or nuclear proliferation, but certainly not in a religious discussion.

And after you explain that, explain how having something vaguely in common with religion automatically means it's wrong.


> And after you explain that, explain how having something vaguely in common with religion automatically means it's wrong.

No one is saying it's wrong, only that the discussion isn't scientific.

It is not only unscientific and quasi-religious; there are strong psychological forces at play that muddy the waters further. There are so many potentially catastrophic threats that the addition of "intelligence" to any of them seems totally superfluous. Numbers are so much more dangerous than intelligence: the Nazis were more dangerous than Einstein; a billion zombies would obliterate humanity; a trillion superbugs would not be much more dangerous if they were intelligent or even super-intelligent; we intelligent humans are very successful for a mammal, but we're far from being the most successful species on Earth by any measure.

This fixation on intelligence seems very much like a power fantasy of intelligent people who really want to believe that super-intelligence implies super-power. Maybe it does, but there are things more powerful -- and more dangerous -- than intelligence. The fantasy is palpable when you read internet forums discussing the dangers of AI, and it casts a strong sense of irrational bias over the discussion, distracting us from less intelligent, though possibly more dangerous, threats. It is perhaps ironic, yet very predictable, that the people currently discussing the subject with the greatest fervor are the least qualified to do so objectively. It is not much different from poor Christians discussing how the meek shall inherit the earth. It is no coincidence that people believe that in the future, power will rest with forces resembling themselves; those of us who have studied the history of religions can easily identify the same phenomenon in the AI scare.


> explain how having something vaguely in common with religion automatically means it's wrong.

It doesn't. Something having religious undertones and something being incorrect are completely orthogonal, a priori.

But the fact remains that discussion of supreme AI has distinctly religious undertones. Such discussions progress in directions that, I believe, an interested observer from another culture would recognize as distinctly influenced by the West's history of monotheism, and particularly Christianity.



