Hacker News

$1B in committed funding. Just, wow.

Side note: I wonder if the Strong AI argument can benefit from something akin to Pascal's Wager, in that the upside of being right is ~infinite with only a finite downside in the opposing case.
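The wager's asymmetry can be made concrete with a toy expected-value comparison (the probability and payoff numbers below are purely hypothetical, chosen only to illustrate the structure of the argument):

```python
# Toy expected-value comparison in the spirit of Pascal's Wager.
# All numbers are made-up illustrations: the point is only that any
# nonzero probability of an unbounded upside dominates a bounded downside.

import math

p_strong_ai = 0.01          # hypothetical (tiny) chance strong AI pays off
upside = math.inf           # the "~infinite" upside if it works
downside = -1e9             # a finite cost (e.g. $1B committed) if it doesn't

ev_invest = p_strong_ai * upside + (1 - p_strong_ai) * downside
ev_abstain = 0.0

# Any p > 0 makes the expected value of investing infinite,
# so investing dominates abstaining under these assumptions.
print(ev_invest > ev_abstain)
```

Of course, this inherits the standard objections to Pascal's Wager (raised downthread): the argument collapses if the downside is also unbounded, or if the probabilities can't be meaningfully assigned.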





While it's a great short story and should be required reading for sci-fi fans, there's a big difference between the singularity and omnipotence.


No, there isn't, if you include time.

Let's say that a general AI is developed and brought online (the singularity occurs). Let's also say that it has access to the internet, so it can communicate and learn, and that it has an effectively unlimited amount of storage space (every hard drive in every device connected to the internet).

At first the AI will know nothing; it will be like a toddler. Then, as it continues to learn and remember, it will become like a teenager, then like an adult in terms of how much it knows. Then it will become like an expert.

But it doesn't stop there! A general AI wouldn't be limited by 1) storage capacity (unlike humans and their tiny brains that can't remember where they put their keys) or 2) death (forgetting everything that it knows).

So effectively a general AI, given enough time, would be omnipotent, because it would continually learn new things forever.


Or maybe the AI would fracture into warring components after every network partition. Maybe it would be unable to maintain cohesion over large areas due to the delay imposed by the speed of electrical communications.

Why should one hypothetical be assumed true and not the other?


Sorry, how did you go from finite hard drives (all of those that have been produced) to unlimited storage capacity?


It's hard to fathom how much human-level AI could benefit society, and it's equally hard to imagine how much it could damage society if built or used incorrectly.

The high-stakes wager isn't success vs failure in creating strong AI, it's what happens if you do succeed.


The framing here is: "What are the implications of doing nothing if you are right (about the inevitability of a malicious strong AI, in this case), compared to the implications about being wrong and still doing nothing?"


Pascal's wager is a fallacy. What benefit could it bring to the discussion?


Semi-off-topic: after Google invested $1B in Uber, I knew they were doing it for the self-driving-car long play. How much of that $1B is directly going to self-driving AI at Uber?


The vast majority is no doubt going to subsidizing rides in an attempt to achieve market dominance.


Drivers aren't profitable on their own?


Not when Uber's competitors are subsidizing drivers.


Finite downside? What about Skynet?


Technically, even the extinction of humanity is a finite downside.

You would have to posit a sort of hell simulation into which all human consciousnesses are downloaded to be maintained in torment until the heat-death of the universe for it to be an equivalent downside.


You have the poles reversed.





