> But I guess you could say it acts more like the person that makes stuff up at work if they don’t know, instead of saying they don’t know.
I have had language models tell me they don't know. That's usually with a RAG-based system like Perplexity, but they can also say they don't know when prompted properly.