
Agree with most of your points, but a large LM, or a small LM for that matter, can already construct a simple SQL query and insert it into a database correctly much of the time. GPT gets it right most of the time.
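A minimal sketch of what I mean, assuming a hypothetical llm() helper that wraps whatever model you're calling (the helper, the prompt, and the events table are all made up for illustration; sqlite3 is stdlib):

    import sqlite3

    def llm(prompt: str) -> str:
        """Hypothetical wrapper around whichever model you're calling;
        returns the model's raw text completion."""
        raise NotImplementedError

    conn = sqlite3.connect("events.db")
    conn.execute("CREATE TABLE IF NOT EXISTS events (ts TEXT, payload TEXT)")

    # Ask the model to write the INSERT itself.
    sql = llm(
        "Write a single SQLite INSERT statement for the table "
        "events(ts, payload), recording payload 'signup' at the current "
        "timestamp. Return only the SQL."
    )
    conn.execute(sql)  # small or large, models get this right most of the time
    conn.commit()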

Then, as a verification step, you ask a second model, not the same one: "What information was inserted into the database in the last hour?" The chance that the first model hallucinates that it stored the information, and the second model then independently hallucinates back exactly the information that was supposed to be there, is pretty slim.
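Continuing the sketch above, the cross-check could look roughly like this (second_llm() is another hypothetical stub; the point is only that the two models answer independently):

    def second_llm(prompt: str) -> str:
        """Hypothetical wrapper around a *different* model."""
        raise NotImplementedError

    # Let the second model write the verification query itself.
    check_sql = second_llm(
        "Write a single SQLite SELECT returning every row inserted into "
        "events(ts, payload) in the last hour. Return only the SQL."
    )
    rows = conn.execute(check_sql).fetchall()

    # For the check to pass falsely, model A would have to hallucinate the
    # INSERT and model B would have to hallucinate a SELECT that surfaces
    # the same phantom row: two independent failures that line up exactly.
    assert any("signup" in payload for _, payload in rows)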

[edit] To give an example, suppose that conversation has already happened 10 times on HN. HN could provide a console with a large or small LM connected to its database, and I could ask the model: "How many times has one person's sentiment about hallucinations been negative, while another person's answer was that hallucinations are not that big of a deal?" From then on, I could quote a conversation that happened 10 years ago, with a link to the previous thread. That would enable more efficient communication.
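To make that concrete, the model might translate the question into something like the query below. The hn_db connection, the comments table, its columns, and the precomputed sentiment labels are all invented for illustration; classifying each comment's sentiment would itself have to be a separate model pass, since SQL can't classify text on its own.

    # Hypothetical console query over an imagined HN comments table.
    count = hn_db.execute("""
        SELECT COUNT(*)
        FROM comments AS a
        JOIN comments AS b ON b.parent_id = a.id
        WHERE a.sentiment = 'hallucinations-are-a-problem'
          AND b.sentiment = 'hallucinations-not-a-big-deal'
    """).fetchone()[0]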



