I had a similar gut reaction, but on reflection I think 4.5's is actually the better response.

On one hand, the response from 4.5 seems pretty useless to me, and I can't imagine a situation in which I would personally find value in it. On the other hand, the prompt it's responding to is so different from how I actually use the tool that my preferences aren't super relevant. I would never give it a prompt that didn't include a clear question or direction, either explicitly or implicitly from context, but I can imagine that someone who does use it that way would actually be looking for something more in line with the 4.5 response than the 4o one. Someone who wanted the 4o response would likely phrase the prompt in a way that explicitly seeks actionable advice, or, if they didn't initially, they would ask for it in a follow-up.

Where I really see value in the model being capable of that type of logic isn't in the ChatGPT use case (at least for me personally), but in API integrations. For example, customer service agents being able to handle interactions more delicately is obviously useful for a business.
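To make that concrete, here's a rough sketch of the kind of integration I mean, using the OpenAI Python SDK. The model name, system prompt, and function are just placeholders I picked for illustration, not anything the provider prescribes:

    # Sketch of a support-ticket handler where the system prompt steers the
    # model toward handling the interaction delicately before proposing fixes.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    def reply_to_ticket(ticket_text: str) -> str:
        completion = client.chat.completions.create(
            model="gpt-4.5-preview",  # placeholder model choice
            messages=[
                {"role": "system",
                 "content": ("You are a customer support agent. Acknowledge the "
                             "customer's frustration before proposing next steps, "
                             "and keep the tone calm and concrete.")},
                {"role": "user", "content": ticket_text},
            ],
        )
        return completion.choices[0].message.content

    print(reply_to_ticket("My order arrived broken and I'm pretty upset about it."))

Whether the model actually reads the situation well enough to know when empathy is called for is exactly the capability being discussed here; the prompt only nudges it.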

All that being said, hopefully the model doesn't have too many false positives on when it should provide an "EQ"-focused response. That would get annoying pretty quickly if it kept happening while I was just trying to get information or have it complete some task.
