I'm not missing your point. I just don't think you're making one.
You keep saying the same nonsense over and over again. An LLM does not know things, so... what kind of argument is that? You're working backwards from a conclusion that is nothing but your own erroneous convictions about what a "statistical model" is, and undertaking a whole lot of mental gymnastics to stay there.
There are a lot of papers there that all try to approach this in different ways. You should read them and try to make an honest argument, one that doesn't amount to "this doesn't count because..." followed by a claim that is in no way empirically or theoretically validated.
You are the one claiming that LLMs are conscious, so it falls to you to prove it.
I argued that LLMs do not have the capacity to have ideas or to know things, and you tried to prove me wrong by citing papers showing, for example, that LLMs have internal states that can be used to predict how likely their output is to be factual. But that doesn’t disprove what I said, because that’s not what it means to have ideas or to know things. By definition, only conscious beings can do those things.
>You are the one claiming that LLMs are conscious, so it falls to you to prove it.
If a machine is doing things previously ascribed only to "conscious beings", then it's on you to tell me why the machine is not conscious. Hopefully with something other than the circular "it cannot be conscious, so it is not conscious".
But whatever. I hadn't quite realized this had devolved into a debate on consciousness. I think that's on me, but I have no interest in a back-and-forth on such an ill-defined, ill-understood concept.
You don't know what consciousness is, what it requires, or what makes it tick in you, and you have no way of proving one way or another that anybody else has it. It's extremely silly, then, don't you think, to make such bold declarations about what doesn't have it? Especially with circular arguments.
What difference does it make if you won't call it conscious, if it does anything a conscious being does? That's just semantics.
You’re still failing to understand that a model being able to output a prediction of something is not the same thing as it “knowing” that thing. The Newton-Raphson method doesn’t “know” what the root of a function is, it just outputs an approximation of it.
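For the avoidance of doubt, here is the entire method, sketched in Python (my own toy illustration, not from any of your papers). There is no representation of what a root is anywhere in it, just an update rule that happens to converge:

    def newton_raphson(f, f_prime, x0, tol=1e-10, max_iter=100):
        # Iterate x <- x - f(x)/f'(x) until f(x) is close enough to zero.
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x
            x -= fx / f_prime(x)
        return x  # best approximation found within max_iter steps

    # Approximate sqrt(2) as the positive root of x^2 - 2.
    print(newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0))

It prints 1.4142135… and at no point does anything in it “know” what a square root is.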
> It’s extremely silly, then, don’t you think, to make such bold declarations about what doesn’t have it?
I don’t find it particularly bold to respond to your assertion that a piece of mathematics is sentient life by stating that you haven’t proven that it is, and that in the absence of that proof, the most rational position is to continue to believe that it is not, as we have done for millennia. The burden of proof is on you.
> if it does anything a conscious being does
You haven’t shown that it can do anything that only conscious beings can do.
Being able to generate a passable approximation of text that might follow some prompt doesn’t mean that it understands the prompt, or its answer. As an obvious example, if you give LLMs maths problems, they change their answers if you change the names of the people in the question. They’re not actually doing maths.
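To make that concrete, this is the kind of perturbation I mean, as a rough sketch (the ask_llm callable is hypothetical, a stand-in for whatever model is being queried):

    # Ask the same word problem with only the name changed. A system that is
    # actually doing the maths should return the same answer every time;
    # sensitivity to the name suggests surface-level pattern matching.
    TEMPLATE = ("{name} picks 5 apples on Monday and twice as many on Tuesday. "
                "How many apples does {name} have in total?")

    def name_swap_test(ask_llm, names=("Alice", "Priya", "Wei", "Olu")):
        answers = {n: ask_llm(TEMPLATE.format(name=n)) for n in names}
        return len(set(answers.values())) == 1, answers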
> Notice anything? It’s not just that the performance on MathGLM steadily declines as the problems gets bigger, with the discrepancy between it and a calculator steadily increasing, it’s that the LLM based system is generalizing by similarity, doing better on cases that are in or near the training set, never, ever getting to a complete, abstract, reliable representation of what multiplication is.[0]
>You’re still failing to understand that a model being able to output a prediction of something is not the same thing as it “knowing” that thing. The Newton-Raphson method doesn’t “know” what the root of a function is, it just outputs an approximation of it.
That is your assertion. I'm not failing to understand anything. I'm simply telling you that you are stating an unproven assertion. This is why I don't like to debate consciousness.
Unless you believe in magic, the only thing that would stop whatever is running Newton-Raphson from "knowing" roots, if you are even right, is that it's not the kind of computation that "knows", not that it's a computation at all.
>I don’t find it particularly bold to respond to your assertion that a piece of mathematics is sentient life by stating that you haven’t proven that it is, and that in the absence of that proof, the most rational position is to continue to believe that it is not, as we have done for millennia. The burden of proof is on you.
The brain computes, and unless you believe in a soul or something similar, that is all the brain does to produce consciousness. Computation is substrate-independent[0]: whether it is chemical reactions and nerve impulses, transistors in a chip, or even pulleys, it does not matter at all what is performing the computation.
Consciousness is clearly an emergent property. Your neurons are not conscious and do not do conscious things, and yet you believe you are conscious.
"piece of mathematics" is entirely irrelevant here.
>You haven’t shown that it can do anything that only conscious beings can do. Being able to generate a passable approximation of text that might follow some prompt doesn’t mean that it understands the prompt, or its answer.
I know LLMs understand because of the kind of responses I get to the kind of queries I give them. This is how we probe and test understanding in humans.
>As an obvious example, if you give LLMs maths problems, they change their answers if you change the names of the people in the question.
No, they don't. If you'd actually read that Apple paper (I assume that's what you are referring to), you would see that GPT-4o, o1-mini, and o1-preview do not shift beyond the margin of error on 4 of the 5 synthetic benchmarks they created, and definitely not on the ones that merely change names. So this is blatantly wrong. Changing names does literally nothing to today's state-of-the-art LLMs.
That Gary Marcus blog is idiotic, but I don't expect much from Gary Marcus. There is not a single human on this planet who can perform arithmetic unaided (no calculator, no writing down numbers) better than SOTA LLMs today. I guess humans don't understand or do math either.
Not to mention that you can in fact train transformers that will generalize perfectly on addition.[1]