
I have this same question about the (apparently many) AI researchers who believe AI poses significant risks to humanity, yet still push forward developing it as fast as they can.



I recently listened to a journalist who spoke to many AI workers in SV. There is an alarmingly pervasive pseudo-religious attitude that they are ushering in a new form of life, and that it is their destiny to be the people who make it happen. Most of them will candidly tell you humanity may go extinct as a result (with at least a 1 in 10 chance), but they choose to plow ahead regardless.

Others appear to be in common modes of willful denial: hubris or salary-depends-on-not-knowing syndrome.


Just guessing, but I'm sure they get paid very well and receive promises from their companies that everything will be done ethically, nothing rushed, etc. We've now seen that OpenAI, Microsoft, and Google care more about the business case than about doing things ethically and carefully.


If a whistleblower at one of these companies came out and said "For the last decade advanced research has been conducted on extraordinarily big LLMs, and they won't even give the public a clue what they are or how they work," you would get a mix of people who a) don't care and b) vilify the companies for not being open and not offering any demonstration of this secret superpower.

"why can't joe-schmo get his hands on this technology", "how can we trust something we can't see and use", etc.

A lot of the capabilities of these models are only emerging as people discover them. I truly don't believe you can make everyone happy with this tech, but isn't it better that the general public can at least explore it?

Do people think that nobody was ever going to try to improve on transformers with more compute, more data, and more parameters? We knew splitting an atom was going to cause a big boom... that's not really how this tech emerged.


Here's my theory: if you look at the surveys, they do report roughly a 10% chance of an extremely bad outcome. BUT they also report a ~20% chance of an extremely good outcome, which leaves ~70% for a neutral one, i.e. a ~90% chance of an outcome that is at least neutral. Simple cost-benefit analysis.
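A minimal sketch of that expected-value framing in Python; the probabilities are the survey-style numbers from the comment above, but the utility values are invented purely for illustration:

    # Probabilities roughly as cited above.
    p = {"extremely_bad": 0.10, "neutral": 0.70, "extremely_good": 0.20}
    # Utilities are assumptions for illustration only; they drive the result.
    u = {"extremely_bad": -100.0, "neutral": 0.0, "extremely_good": 50.0}

    expected_utility = sum(p[k] * u[k] for k in p)
    print(expected_utility)  # 0.10*-100 + 0.70*0 + 0.20*50 = 0.0

The whole conclusion hinges on the assumed utilities: make "extremely bad" (extinction) much more negative and the expected value goes sharply negative, which is exactly where the disagreement lies.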


I think they’re thinking like this: “it’s dangerous, but better that I do it than anyone else.”


Because they believe the future is uncertain and the possible upside exceeds the downside?


"Intelligence, uh, finds a way."



