
You're missing my point. Words are simply serialized thoughts. When we humans read words, as you are doing for this sentence, we build a model of what those words mean based on our conceptual understanding and experience in space-time. That modeling is how you can then determine whether the model formed in your mind from the serialized words in the sentence corresponds to reality or not. For the LLM, there is no model of reality whatsoever; it's just words, so the LLM has no way of knowing whether the words, when modeled, would be true or false.



An LLM does have a model of reality. An LLM's reality is built on the experiences (words) it's been fed.

Humans are similar. A human's reality is built on the experiences (senses) it's been fed. There are definitely several major differences, the obvious one being that we have different sensory input than an LLM, but there are others, like humans having an instinctual base model of reality, shaped by the effects of natural selection on our ancestors.

Just like an LLM can't tell if the reality it's been fed actually corresponds to the "truer" outside reality (you could feed an LLM lies like "the sky is plaid" in such a way that it would report they're true), a human can't tell if the reality they've been fed actually corresponds to a "truer" outside reality (humans could be fed lies like "we are in true reality" when we're actually all NPCs in a video game at some higher level).

The LLM can't tell if its internal reality matches an outside reality, and humans can't tell if their internal reality matches an outside reality, because both only have the input they've received to go on, and neither can tell if that input is flawed or incomplete.


Words are not reality; they are just data serialized from human experience of the world, with no reference to the underlying meaning of those words. An LLM is unable to build the conceptual space-time model that the words reference, so it has no understanding whatsoever of the meaning of those words. The evidence for this is everywhere in the "hallucinations" of LLMs. It's just statistics on words, and that gets you nowhere near understanding the meaning of words, that is, conceptual awareness of matter through space-time.
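
To make "statistics on words" concrete, here is a toy sketch (purely illustrative, not how any production LLM is actually built): a bigram counter that continues a sentence using nothing but co-occurrence counts from its training text. It will repeat "the sky is plaid" just as readily as "the sky is blue", because nothing in the counts refers to the world, only to which words followed which.

    from collections import Counter, defaultdict

    # Toy word statistics: count which word follows which,
    # with no reference to what any word means.
    corpus = "the sky is blue . the sky is plaid .".split()

    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    # Continuations for "... is" come straight from the counts,
    # true or not: [('blue', 1), ('plaid', 1)]
    print(bigrams["is"].most_common())
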


This is a reverse anthropic fallacy. It may be true of a base model (though it probably isn't), but it isn't true of a production LLM system, because the LLM companies have evals and testing systems and such things, so they don't release models that clearly fail to understand things.

You're basically saying that no computer program can work, because if you randomly generate a computer program then most of them don't work.


Not at all. I'm saying there is a difference between statistics about word data and working with space-time data and concepts that classify space-time. We do the latter https://graphmetrix.com/trinpod-server


Insofar as this is a philosophically meaningful assertion, it isn't true. LLMs live in a universe of words, it is true; within that universe, they absolutely have world models, which encode the relationships between concepts encoded by words. It's not "reality", but neither are the conceptual webs stored in human brains. Everything is mediated through senses. There's no qualitative difference between an input stream of abstract symbols, and one of pictures and sounds. Unless you think Helen Keller lacked a concept of true and false?


They don't have world models, they have word models. A very big difference indeed!


Would you say that blind-deaf-paralyzed people do not have world models either, since they can only experience the world through words?


Well, if they have hearing, they can build a world model based on that sensation. So when someone talks about the fall, they can remember the sound of leaves hitting other leaves as they fall. The senses give us measurement data about reality that we use to model reality. We humans can then create concepts about that experience and ultimately communicate with others, using common words to convey that conceptual understanding. Word data alone is just word data, with no meaning. This is why, when I look at a paragraph in Russian, it has no meaning for me (as I don't understand Russian).



