You would think Cursor's leadership would be aware of other cases where LLM customer support went awry - e.g. the Canadian airline whose chatbot promised a bereavement discount, which ended with a judge ordering them to honor the chatbot's BS.

I suspect Cursor told themselves that they're super-smart AI experts who would never make an amateur mistake like the airline's: they'd use prompt engineering + RAG, and with that it would be unpossible for the LLM to make a mistake.
