
Many sources of information contain inaccuracies, either known at the time of publication or learned afterward.

Education involves some fact-checking and critical thinking, regardless of the strength of the original source.

It seems like using LLMs in any serious way will require a variety of techniques to mitigate their new, unique reasons for being unreliable.

Perhaps a “chain of model provenance” becomes an important one of these techniques.
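
To make that concrete, here's a rough sketch of what one link in such a chain might look like. This is purely illustrative (Python, every field name and scheme is my own invention, not an existing standard): each record names the model, the training sources its publisher claims, and the hash of the previous record, so tampering anywhere upstream breaks verification of everything downstream.

    # Hypothetical sketch of a provenance chain link; not any real standard or API.
    import hashlib
    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class ProvenanceRecord:
        model_id: str           # e.g. "some-org/chat-model-v2" (made up)
        parent_hash: str        # digest of the previous record; "" for a base model
        training_sources: list  # dataset names/URLs the publisher claims were used
        notes: str = ""

        def digest(self) -> str:
            # Hash the record contents, including parent_hash, so altering any
            # earlier link invalidates every later one.
            payload = json.dumps(asdict(self), sort_keys=True).encode()
            return hashlib.sha256(payload).hexdigest()

    def verify_chain(records: list) -> bool:
        # Each record must point at the digest of the record before it.
        previous_hash = ""
        for record in records:
            if record.parent_hash != previous_hash:
                return False
            previous_hash = record.digest()
        return True

This doesn't make the publisher's claims true, of course; it just makes the claimed lineage tamper-evident and auditable.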




If you already know that your model contains falsehoods, what is gained by having a chain of provenance? It can't possibly make you trust it more.


People carry around a shitload of falsehoods, including you, yet you still assign varying amounts of trust to different individuals.

A chain of provenance isn't much different than that person having a diploma, a company work badge, and a state-issued ID. You at least know they aren't some random person off the street.


That only provides value if you know how the diploma-issuing organisation operates.

If not, it's just a diploma from some random organisation off the street.

So, wake me up when you know how OpenAI and Google are cooking their models.



