
In theory that sounds great, but most LLM providers are trying to produce useful models that will ultimately be widely used and make them money.

A model that is more correct but swears at and insults the user won't sell. Likewise, a model that gives criminal advice is likely to open the company up to lawsuits in certain countries.

A raw LLM might perform better on a benchmark but it will not sell well.






Disgusted by ChatGPT's flattery and willingness to go along with my half-baked nonsense, I created an anti-ChatGPT, which is unfriendly and pushes back on nonsense as hard as possible.

All my friends hate it, except one guy. I used it for a few days, but it was exhausting.

I figured out the actual use cases I was using it for, and created specialized personas that work better for each one. (Project planning, debugging mental models, etc.)

I now mostly use a "softer" persona that's prompted to point out cognitive distortions. At some point I realized I'd built a therapist. Hahaha.
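The persona-per-use-case approach above can be sketched as a small set of system prompts selected at request time. This is a minimal, hypothetical illustration (the persona names and prompt wording are invented, not the commenter's actual prompts), shaped for a chat-completion style API:

```python
# Hypothetical sketch of use-case-specific personas, each expressed as a
# system prompt prepended to the conversation. The persona texts below
# are illustrative assumptions, not the commenter's real prompts.

PERSONAS = {
    "pushback": (
        "You are blunt and skeptical. Never flatter the user. "
        "Challenge weak reasoning and half-baked ideas directly."
    ),
    "planning": (
        "You help break projects into concrete, ordered tasks and "
        "question unrealistic scope or timelines."
    ),
    "debugging": (
        "You probe the user's mental model of the system: what they "
        "expect to happen versus what actually happens."
    ),
    "therapist": (
        "You respond gently but point out cognitive distortions such "
        "as catastrophizing or all-or-nothing thinking."
    ),
}

def build_messages(persona: str, user_text: str) -> list[dict]:
    """Prepend the chosen persona's system prompt to the user message."""
    return [
        {"role": "system", "content": PERSONAS[persona]},
        {"role": "user", "content": user_text},
    ]
```

The message list returned by `build_messages` can then be passed to whichever chat API is in use; keeping each persona in its own entry makes it cheap to experiment with (and discard) variants, much as the commenter describes.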



