If people are actually relying on LLMs for validation of ideas they come up with during mental health episodes, they have to be pretty sick to begin with, in which case, they will find validation anywhere.
If you've spent time with people with schizophrenia, for example, they will have ideas come from all sorts of places, and see all sorts of things as a sign/validation.
One moment it's that person who seemed like they might have been a demon sending a coded message, next it's the way the street lamp creates a funny shaped halo in the rain.
People shouldn't be using LLMs for help with certain issues, but let's face it: those who can't tell it's a bad idea are going to be guided through life in a strange way with or without an LLM.
It sounds almost impossible to achieve some sort of consensus across every LLM service whereby they are considered "safe" for use by the world's mentally unwell.
> If people are actually relying on LLMs for validation of ideas they come up with during mental health episodes, they have to be pretty sick to begin with, in which case, they will find validation anywhere.
You don't think that a sick person having a sycophant machine in their pocket, one that agrees with them on everything, is separated from material reality and human needs, never gets tired, and is always available to chat, is an escalation here?
> One moment it's that person who seemed like they might have been a demon sending a coded message, next it's the way the street lamp creates a funny shaped halo in the rain.
Mental illness is progressive. Not all people in psychosis reach this level, especially if they get help. The person I know could end up like this if _people_ don't intervene. Chatbots, especially those that validate delusions, can certainly accelerate the process.
> People shouldn't be using LLMs for help with certain issues, but let's face it, those that can't tell it's a bad idea are going to be guided through life in a strange way regardless of an LLM.
I find this take very cynical. People with schizophrenia can and do get better with medical attention. To treat their decline as predetermined is incorrect, even irresponsible if you work on products with this kind of reach.
> It sounds almost impossible to achieve some sort of unity across every LLM service whereby they are considered "safe" to be used by the world's mentally unwell.
What’s the point here? That ChatGPT can just do whatever to people because “sickers gonna sick”?
Perhaps ChatGPT could be optimized for helpfulness and usefulness, not engagement. And the thing is, o1 used to be pretty good - but they retired it to push worse models.