
Please, please, please stop saying "thought". Whatever the AI is doing has nothing to do with thought. When we say the word "thought", it means something; please don't use the same word for whatever the AI is doing and trivialize it. Invent a new word if needed, but for Pete's sake be accurate and truthful.



Okay, what is a thought then?


Something in the mind.

(I didn't make that up; it's one of the definitions in Merriam-Webster: https://www.merriam-webster.com/dictionary/thought)


Way to cherry-pick the one definition that fits your argument and ignore all the other definitions, which kinda fit what this model is doing.


> Way to cherry-pick the one definition that fits your argument and ignore all the other definitions, which kinda fit what this model is doing.

There is no definition on the linked page that isn't either self-referential or even less applicable in the context of transformer-based LLMs.

* "an individual act or product of thinking": This is self-referential.

* "a developed intention or plan": The model doesn't have an intention or plan - it predicts the probability of text based on a user prompt mixed with corporate-inflicted bias.

* "the intellectual product or the organized views and principles of a period, place, group, or individual": GPTs are not periods, places, groups or individuals. I don't doubt that the input to the model is based on thought, but this is not the same as the model thinking. Maybe this is where your confusion originates from?

Bonus: I asked GPT-4o "Can you think?" (sorry, I won't pay for o1). It has been trained to state that it can't think:

"I can process information, analyze patterns, and provide responses based on data and logical reasoning. However, I don't "think" in the human sense—I don't have consciousness, emotions, or subjective experiences. My abilities are limited to generating text, solving problems, and assisting with tasks based on the vast amount of information I've been trained on.

In essence, I simulate reasoning, but I don't have awareness or independent thought."

I would add that I strongly doubt it is capable of even simulating "reasoning", as claimed by the model's "authors" (not sure we can call them authors, since most of the model isn't their IP). And I can prove that the models up to 4o aren't generally able to solve problems.

The question really is whether a group of people is attempting to anthropomorphize a clever matrix processor to maximize hype and sales. You'll have to answer that one for yourself.


What does self-referential have to do with anything? These LLMs have proven they can "talk about themselves".

> an individual act or product of thinking

Emphasis on "product of thinking", though you'll probably get upset at the use of the word "thinking" too. People have applied the word "thinking" to computers for decades: when a computer is busy or loading, they might say "it's thinking."

> a developed intention or plan

You could certainly ask this model to write up a plan for something.

> reasoning power

Whether you like it or not, these LLMs do have some limited ability to reason. It's far from human-level reasoning, and they VERY frequently make mistakes, hallucinate, and misunderstand, but these models have proven they can reason about things they weren't specifically trained on. For example, I remember one person who made up a new programming language that had never existed before, and they were able to discuss it with an LLM.

No, they're not conscious. No, they don't have minds. But we need to rethink what it means for something to be "intelligent", or what it means for something to "reason", in a way that doesn't require a conscious mind.

For the record, I find LLM technology fascinating, but I also see how flawed it is, how overhyped it is, that it is mostly a stochastic parrot, and that currently its greatest use is as a grand-scale bullshit misinformation generator. I use ChatGPT sparingly, only when I'm confident it can actually give me an accurate answer. I'm not here to praise chatbots or anything, but I also don't have a blind hatred for the technology, nor do I immediately reject everything labeled as "AI".


> What does self referential have to do with anything?

It means that Webster's definition of "thought" as "an individual act or product of thinking" refers to the word being defined (thought -> thinking) and is therefore self-referential. I already said in my prior response that if you call the model's input a "product of thinking", then I agree, but that doesn't give the model an ability to think. It just means that its input has been thought up by humans.

> When a computer is busy or loading, they might say "it's thinking."

Which I hope was never meant to be a serious claim that a computer would really be thinking in those cases.

> You could certainly ask this model to write up a plan for something.

This is not the same thing as planning. Because it's an LLM, if you ask it to write up a plan, it will do its thing and predict the most probable next words based on its training corpus. That is not the same as actively planning something with the intention of achieving a goal. It's basically reciting plans that exist in its training set, adapted to the prompt, which can look convincing to a certain degree if you are lucky.
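
To make "predict the most probable next words" concrete, here is a toy sketch of a decoding loop. Everything in it is made up for illustration (the tiny vocabulary, the hard-coded probabilities, the greedy pick); a real LLM computes the distribution with a neural network over roughly a hundred thousand tokens, but the outer loop is the same: predict, pick, append, repeat.

    # Toy "language model": given the last word, return a made-up probability
    # distribution over a tiny vocabulary. The numbers are invented for this
    # example; they only stand in for whatever the network learned from its corpus.
    TOY_MODEL = {
        "plan:":  {"first,": 0.6, "start": 0.3, "banana": 0.1},
        "first,": {"gather": 0.7, "sleep": 0.3},
        "gather": {"requirements.": 0.8, "snacks.": 0.2},
    }

    def next_word_distribution(last_word):
        return TOY_MODEL.get(last_word, {"done.": 1.0})

    def generate(prompt, max_words=5):
        words = prompt.split()
        for _ in range(max_words):
            dist = next_word_distribution(words[-1])
            # Greedy decoding: always take the most probable next word.
            words.append(max(dist, key=dist.get))
        return " ".join(words)

    print(generate("Write up a plan:"))
    # -> Write up a plan: first, gather requirements. done. done.

There is no goal or intention anywhere in that loop, just a distribution being sampled one word at a time.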

> Whether you like it or not, these LLMs do have some limited ability to reason.

While this is an ongoing discussion, there are various papers that make good attempts at proving the opposite. If you think about it, LLMs (before the trick applied in the o1 model) cannot have any real reasoning ability, since the processing time for each token is constant. Whether adding more internal "reasoning" tokens changes anything about this, I'm not sure anyone can say for certain at the moment, since the model is not open to inspection, but there are many pointers suggesting it's rather improbable.

The most prominent is the fact that LLMs come with a greater-than-zero chance of each predicted word being wrong, so real reasoning is not possible: there is no way to reliably check for errors (hallucination). Did you ever get "I don't know." as a response from an LLM? Might that be because it cannot reason and instead just predicts the next word based on probabilities inferred from the training corpus (which, for obvious reasons, doesn't include what the model doesn't "know", and reasoning would be required to infer that it doesn't know something)?
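
A rough back-of-the-envelope illustration of that compounding-error point (the 1% per-token error rate and the independence assumption are purely assumed numbers for the example, not measurements):

    # If each generated token is wrong with some independent probability eps,
    # the chance that a chain of n tokens is entirely error-free decays
    # exponentially with n. eps = 1% is an assumed figure for illustration.
    eps = 0.01
    for n in (10, 100, 1000):
        p_clean = (1 - eps) ** n
        print(f"n={n:5d}  P(no error in chain) = {p_clean:.3f}")

    # n=   10  P(no error in chain) = 0.904
    # n=  100  P(no error in chain) = 0.366
    # n= 1000  P(no error in chain) = 0.000

The independence assumption is crude, but it shows why piling more generated tokens onto a system that cannot verify its own intermediate steps doesn't obviously buy you reliable reasoning.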

> I'm not here to praise chatbots or anything, but I also don't have a blind hatred for the technology, nor do I immediately reject everything labeled as "AI".

I hope I didn't come across as having "blind hatred" for anything. I think it's important to understand what transformer-based LLMs are actually capable of and what they are not. Anthropomorphizing technology is, in my estimation, a slippery slope. Calling an LLM a "being", or saying it is "thinking" or "reasoning", are only some examples of what "sales-optimizing" anthropomorphization can look like. This comes not only with the danger of you investing in the wrong thing, but also of making wrong decisions that could have significant consequences for your career and life in general. Last but not least, it might be detrimental to the development of genuinely useful AI (as in "improving our lives"), since it may lead decision-makers in politics to draw the wrong conclusions about regulation and so on.


Exactly, and now please don't say AI has a mind…


It's called terminology. Every field has words that mean very different things from the layman's definition. It's nothing to get upset about.


Not upset, but saddened and disappointed… this is how snake oil was sold.


No one gets this emotional about astrophysicists calling almost everything 'metal', and this is definitely less bad than that.


It's way worse than that... next thing you know, we'll be talking about AI's mind and AI's soul and how it has a soul purer than ours... just so they can sell you a few damn chips.



