Hacker News

Superintelligent models need not be LLMs. They could work more like animals, which predict future experiences rather than text (predictive coding). Predicting reality directly has no LLM-style ceiling imposed by human-written text.



That may be true, but I can't speak to any research being conducted in that area. My point is that the hype around the dangers of super-intelligence seems to have been spurred by improvements to large language models, even though large language models don't seem (to me) to be a suitable way to develop something with super-intelligence.


It's more that the general pace of innovation has sped up. Three years ago something like ChatGPT would have similarly been dismissed as science fiction. So we probably shouldn't dismiss the possibility that we will have something far better than LLMs in another three years.



