And remember: as accuracy increases, the cost of validation goes up, and not even linearly.
We expect computers to be right. It's a trust problem. Average users will simply trust the results of LLMs and move on without proper validation. And the way LLMs are trained to mimic human interaction is not helping either. This will reduce overall quality in society.
It's a different thing to work with another human, because there is intention. A human either wants to be correct or wants to mislead me, and I take that into account without even thinking about it.
And I don't expect expert models to improve things, unless the problem space is really simple (like checking eggs for anomalies).