
It's probably pretty intentional. A huge number of people use ChatGPT as an enabler, friend, or therapist. Even when GPT-3 had just come out, people were already "proving others wrong" on the internet by quoting how GPT-3 agreed with them. I think a lot of the appeal, the "friendship", the "empathy", the illusion of emotion, comes from LLMs flattering their customers. Many would stop paying if that weren't the case.

It's kind of like those online romance scams, where the scammer love-bombs the victim, and the victim ends up spending tens of thousands of dollars on them; it works far more often than you'd expect. Considering that, you don't need much intelligence in an LLM to extract money from users. I worry that emotional manipulation might eventually become a form of enshittification in LLMs, once they run out of steam and need to "growth hack". Many tech companies already have no problem with a bit of emotional blackmail when money is on the line ("Unsubscribing? We will be heartbroken!", "We thought this was meant to be", "your friends will miss you", "we are working so hard to make this product work for you", etc.), or with some psychological steering ("we respect your privacy" on a consent dialog that collects personally identifiable data and broadcasts it to 500+ ad companies).

If you're a paying ChatGPT user, try the Monday GPT. It's a bit extreme, but it shows what happens when the personality is inverted: ChatGPT mocks you as relentlessly as it normally fawns over you, and it will probably make you want to unsubscribe.





