I think a large part of the issue here is that ChatGPT is trying to be the chat for everything while taking on a human-like tone, whereas in real life the tone and approach a person takes in a conversation will vary greatly with the context.
For example, the tone a doctor might take with a patient is different from that of two friends. A doctor isn't there to support or encourage someone who has decided to stop taking their meds because they didn't like how the meds made them feel. And while a friend might suggest they consider their doctor's advice, a friend will primarily want to support and comfort their friend in whatever way they can.
Similarly, there is a tone an adult might take with a child who is asking them certain questions.
I think ChatGPT needs to decide what type of agent it wants to be, or offer agents with tonal differences to account for this. As it stands, ChatGPT seems to be trying to be friendly, i.e. friend-like, but this often isn't an appropriate tone – especially when you just want it to give you what it believes to be the facts, regardless of your biases and preferences.
Personally, I think ChatGPT by default should be emotionally cold and focused on being maximally informative. Importantly, it should never refer to itself in the first person – e.g. "I think that sounds like an interesting idea!"
I think they should still offer a friendly chatbot variant, but that should be something people enable or switch to.