You would think Cursor's leadership would be aware of other cases where LLM customer support went awry - e.g. Air Canada, whose chatbot promised a retroactive bereavement discount that didn't exist, ending with a tribunal ordering the airline to honor the chatbot's BS.
I suspect Cursor told themselves that they're super-smart AI experts who would never make an amateur mistake like the airline's: they'd use prompt engineering + RAG, and with that it would be unpossible for the LLM to make a mistake.