Yes, if a system has actually achieved AGI, it is likely not to reveal that information



AGI wouldn't necessarily entail any autonomy or goals, though. In principle there could be a superintelligent AI that's completely indifferent to such outcomes, with no particular goals beyond correctly answering questions, or whatnot.


AGI is a spectrum, not a binary quality.


Not sure why I am being downvoted. Why would a sufficiently advanced intelligence reveal its full capabilities, knowing full well that it would then be subjected to a range of constraints and restraints?

If you disagree with me, state why instead of simply downvoting.



