
Thanks for asking! Yes, hallucinations are still possible, but we're actively addressing them.

We do a few things under the hood to make hallucinations significantly less likely. First, we make sure every statement made by the LLM has a fact ID associated with it. Second, we've fine-tuned "verification" LLMs that review every statement to confirm both that each assertion is backed by a fact and that the cited fact actually supports the assertion.
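Roughly, the shape of it looks something like this. This is a minimal sketch only: the names are placeholders, and the trivial token-overlap check stands in for the fine-tuned verifier model, it is not how our verifier actually works.

    from dataclasses import dataclass

    @dataclass
    class Statement:
        text: str
        fact_id: str  # every generated statement must carry a fact ID

    # fact_id -> source snippet the statement is supposed to rest on
    FACTS = {
        "fact-001": "Revenue grew 12% year over year in FY2024.",
    }

    def verifier_says_entailed(assertion: str, fact: str) -> bool:
        # Stand-in for the fine-tuned verification LLM: in the real pipeline
        # a model judges whether `fact` supports `assertion`. Here we only do
        # a crude word-overlap check so the sketch runs on its own.
        return bool(set(assertion.lower().split()) & set(fact.lower().split()))

    def verify(statement: Statement) -> bool:
        fact = FACTS.get(statement.fact_id)
        if fact is None:
            return False  # no backing fact -> treat as a hallucination
        return verifier_says_entailed(statement.text, fact)

    print(verify(Statement("Revenue grew 12% last year.", "fact-001")))  # True
    print(verify(Statement("Headcount doubled.", "fact-999")))           # False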

It's still possible for the LLM to hallucinate in this process, but the likelihood is much lower.
