
>You’re still failing to understand that a model being able to output a prediction of something is not the same thing as it “knowing” that thing. The Newton-Raphson method doesn’t “know” what the root of a function is, it just outputs an approximation of it.

That is your assertion. I'm not failing to understand anything; I'm simply telling you that you are stating an unproven assertion. This is why I don't like to debate consciousness.

Unless you believe in magic, the only thing that would stop whatever is running Newton-Raphson from "knowing" roots (if you are even right) is that it's not the kind of computation that "knows", not that it's a computation at all.
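
For concreteness, this is the entire computation being discussed. A minimal Python sketch of Newton-Raphson (the target function, starting point, and tolerance are illustrative choices):

    # Newton-Raphson: repeatedly refine a guess x via x <- x - f(x)/f'(x).
    def newton_raphson(f, f_prime, x0, tol=1e-12, max_iter=100):
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                return x
            x -= fx / f_prime(x)
        return x

    # Example: the positive root of f(x) = x^2 - 2, i.e. sqrt(2).
    print(newton_raphson(lambda x: x*x - 2, lambda x: 2*x, x0=1.0))
    # ~1.4142135623730951

Nothing about the procedure changes if it runs on neurons, transistors, or pulleys, which is the point.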

>I don’t find it particularly bold to respond to your assertion that a piece of mathematics is sentient life by stating that you haven’t proven that it is, and that in the absence of that proof, the most rational position is to continue to believe that it is not, as we have done for millennia. The burden of proof is on you.

The brain computes, and unless you believe in a soul or something similar, that is all the brain does to produce consciousness. Computation is substrate independent [0]. Whether it is chemical reactions and nerve impulses, transistors in chips, or even pulleys, it does not matter at all what is performing the computation.

Consciousness is clearly an emergent property. Your neurons are not conscious and do not do conscious things, and yet you believe you are conscious. "Piece of mathematics" is entirely irrelevant here.

>You haven’t shown that it can do anything that only conscious beings can do. Being able to generate a passable approximation of text that might follow some prompt doesn’t mean that it understands the prompt, or its answer.

I know LLMs understand because of the kind of responses I get to the kind of queries I give them. This is how we probe and test understanding in humans.

>As an obvious example, if you give LLMs maths problems, they change their answers if you change the names of the people in the question.

No, they don't. If you'd actually read that Apple paper (I assume that's what you are referring to), you would see that GPT-4o, o1-mini, and o1-preview do not shift beyond the margin of error on 4 of the 5 synthetic benchmarks they created, and definitely not on the ones that only change names. So this is blatantly wrong. Changing names does essentially nothing to today's state-of-the-art LLMs.
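
For anyone who hasn't read it: the perturbation in question is just templated name substitution, leaving the arithmetic untouched. A hypothetical sketch (the template and name list are mine, not the paper's):

    import random

    TEMPLATE = ("{name} has {a} apples and buys {b} more. "
                "How many apples does {name} have now?")
    NAMES = ["Sophie", "Liam", "Aisha", "Mateo"]

    # Same arithmetic, different surface form; the answer (a + b) never changes.
    for _ in range(4):
        print(TEMPLATE.format(name=random.choice(NAMES), a=3, b=5))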

That Gary Marcus blog is idiotic, but I don't expect much from Gary Marcus. There is not a single human on this planet who can perform arithmetic unaided (no calculator, no writing numbers down) better than SOTA LLMs today. I guess humans don't understand or do math either.

Not to mention that you can in fact train transformers that generalize perfectly on addition [1].
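
[1] studies a small transformer that groks modular addition. Below is a minimal PyTorch sketch in that spirit, not the post's exact code; the architecture, modulus, train/test split, and hyperparameters are my illustrative choices:

    import torch
    import torch.nn as nn

    p = 97  # modulus; the vocabulary is the residues 0..p-1
    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Full dataset: every pair (a, b) with label (a + b) mod p.
    a = torch.arange(p).repeat_interleave(p)
    b = torch.arange(p).repeat(p)
    x = torch.stack([a, b], dim=1).to(device)   # (p*p, 2) token pairs
    y = ((a + b) % p).to(device)                # (p*p,) labels

    class TinyTransformer(nn.Module):
        def __init__(self, d=128, heads=4):
            super().__init__()
            self.emb = nn.Embedding(p, d)
            self.pos = nn.Parameter(0.02 * torch.randn(2, d))
            layer = nn.TransformerEncoderLayer(d, heads, 4 * d, batch_first=True)
            self.enc = nn.TransformerEncoder(layer, num_layers=1)
            self.head = nn.Linear(d, p)

        def forward(self, toks):
            h = self.enc(self.emb(toks) + self.pos)
            return self.head(h[:, -1])          # predict from the last position

    model = TinyTransformer().to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
    loss_fn = nn.CrossEntropyLoss()

    # Train on 30% of all pairs; generalization to the held-out 70% is the
    # point of interest (in [1] it appears long after the train loss drops).
    perm = torch.randperm(p * p, device=device)
    cut = p * p * 3 // 10
    train_idx, test_idx = perm[:cut], perm[cut:]

    for step in range(5000):
        model.train()
        opt.zero_grad()
        loss = loss_fn(model(x[train_idx]), y[train_idx])
        loss.backward()
        opt.step()
        if step % 500 == 0:
            model.eval()
            with torch.no_grad():
                acc = (model(x[test_idx]).argmax(-1) == y[test_idx]).float().mean()
            print(f"step {step}: train loss {loss.item():.3f}, held-out acc {acc:.3f}")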

[0] https://www.edge.org/response-detail/27126

[1] https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mec...



