
The existential risk that AI poses is first and foremost the threat that it will be centralized and controlled by a closed company like OpenAI, or by a small oligopoly of such companies.



I don’t think centralization is the real threat. As James Currier [1] pointed out, AI will be commoditized through open-source and model convergence, making oligopoly control unlikely.

The real challenge is standardizing safety across open models and countering malignant AI use, especially amid demographic challenges like declining fertility.

[1] https://x.com/jamescurrier/status/1884057861514485803?s=46&t...


What's the connection between malignant AI use and declining fertility?


AI + VR will most probably create addictive, lifelike experiences that may affect real-world relationships. Like TikTok and Instagram algorithms, this could reduce the desire for intimacy and worsen declining fertility rates.


That concern is yours to prioritize, but it reduces the term "existential risk" to a metaphor. The literal existential risk is the risk that AI destroys all humans in pursuit of goals that have nothing in common with human values.



