I think we can’t dismiss the impact of sentience just by saying “it’s not sentient”. A mere chatbot is “relevantly sentient” sociologically when at least some people, at least some of the time, feel that it is.

This has clearly happened. Although it’s clear these systems are not technically sentient in any sense, the most interesting reason to ask “when will we have sentient AI?” is sociological, not technological. The sociological impact of people caring about AIs the way they care about pets or humans is around the corner, and it may be bigger than we expect, even if the AIs are merely fancy autocomplete bots.

People cared about Tamagotchis. Now consider a Tamagotchi people thought they had a deep existential conversation with, or one that talked them out of suicide.




Given the way tech people generally react to AI advances, "real" artificial general intelligence will only be accepted as existing once it becomes so complicated that nobody understands how it works even in principle.

Once they know how something works, it's no longer "magic". At each breakthrough they pull an about-face once they've learned how it works, then go around mocking others as stupid or gullible because those people are less informed and still marvel at the "magic".



