> They are not intelligent.

Citation needed. Numerous actual citations have demonstrated hallmarks of intelligence for years. Tool use. Comprehension and generalization of grammars. World modeling with spatial reasoning through language. Many of these are readily testable in GPT. Many people have tested them, and I dare say that LLMs’ reading comprehension, problem-solving, and reasoning skills do surpass those of many actual humans.

> They model intelligent behavior

It is not at all clear that modeling intelligent behavior is any different from intelligence. This is an open question. If you have an insight there I would love to read it.

> They don't know or care what language is: they learn whatever patterns are present in text, language or not.

This is identical to how children learn language prior to schooling. They listen and form connections based on the co-occurrence of words. Their brains are working overtime to predict what sounds follow next. Before anyone says “not from text!”, please don’t forget people who can’t see or hear. Before anyone says “not only from language!”, multimodal LLMs are here now too!
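
To make the prediction-from-co-occurrence point concrete, here is a toy sketch in Python. The corpus is invented for illustration, and real LLMs learn neural weights over subword tokens rather than raw counts, but the training objective is the same shape: predict what comes next from what has co-occurred before.

    from collections import Counter, defaultdict

    # Tiny stand-in corpus for "language heard so far"; purely illustrative.
    corpus = "the cat sat on the mat and the cat slept".split()

    # Record which word follows which (bigram co-occurrence counts).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    # "Prediction" here is just the most frequent observed continuation.
    def predict_next(word):
        return following[word].most_common(1)[0][0]

    print(predict_next("the"))  # -> "cat" (seen twice after "the", "mat" once)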

I’m not saying they’re perfect, or even that they possess the same type of intelligence. Obviously the mechanisms are different. However, far too many people in this debate are either unaware of these models’ capabilities or hold on too strongly to human exceptionalism.

> There is this religious cult surrounding LLMs that bases all of its expectations of what an LLM can become on a personification of the LLM.

Anthropomorphizing LLMs is indeed an issue but is separate from a debate on their intelligence. I would argue there’s a very different religious cult very vocally proclaiming “that’s not really intelligence!” as these models sprint past goal posts.




> hallmarks of intelligence

All through the lens of personification. It's important to take a step back and ask, "Where do these hallmarks come from?"

The hallmarks of intelligence are literally what is encoded into text. The reason LLMs are so impressive is that they manage to follow those patterns without any explicit direction.

> I dare say that LLMs’ reading comprehension, problem-solving, and reasoning skills do surpass those of many actual humans.

People tend to over-optimize reading comprehension by replacing what they are actually reading with what they predict they are reading. Every person has a worldview built out of prior knowledge that they use to disambiguate language. It takes effort to suspend one's worldview, and it takes effort to write accurate, unambiguous language.

An LLM cannot have that problem, because an LLM cannot read. An LLM models text. The most dominant patterns of text are language: either the model aligns with those patterns, or we humans call the result a failure and redirect our efforts.

> Anthropomorphizing LLMs is indeed an issue but is separate from a debate on their intelligence.

How could that even be possible? The very word, "intelligence" is an anthropomorphization. Ignoring that reality moves the argument into pointless territory. If you try to argue that an anthropomorphized LLM is intelligent, then the answer is, "No shit, Sherlock. People are intelligent!" That doesn't answer any questions about a real LLM.

> as these models sprint past goal posts.

Either an LLM succeeds at a goal, or it fails. It has no idea what the difference is. The LLM has no concept of success: no category for failure. An LLM has no goals or intentions, and doesn't make a single logical decision.

So where does its success come from? The text being modeled. Without humans authoring that text, there is no model at all!

The goals are authored, too. Every subject, every decision, every behavior, and every goal is determined by a human. Without human interaction, the LLM is nothing. Does nothing think? Does an arrow find its target? Of course not.


Citation needed for you!


Sure. A few below but far from exhaustive:

- https://arxiv.org/abs/1909.07528 (Emergent Tool Use From Multi-Agent Autocurricula)

- https://arxiv.org/abs/2212.10403 (Towards Reasoning in Large Language Models: A Survey)

- https://arxiv.org/abs/2201.11903 (Chain-of-Thought Prompting Elicits Reasoning in Large Language Models)

- https://arxiv.org/abs/2210.13382 (Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task)

There are also literally hundreds of articles and tweet threads about it. Moreover, as I said, you can test many of my claims above directly using readily available LLMs.
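
For instance, here is a minimal sketch of one such test using the openai Python package; the model name and the probe are placeholders, and any current chat model plus your own probes work the same way:

    from openai import OpenAI  # assumes `pip install openai` and OPENAI_API_KEY set

    client = OpenAI()
    # A simple spatial-reasoning probe; swap in your own tests of the claims above.
    probe = ("I put a coin in a cup, flip the cup upside down onto a table, "
             "then lift the cup straight up. Where is the coin now, and why?")
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; use whatever model is available to you
        messages=[{"role": "user", "content": probe}],
    )
    print(response.choices[0].message.content)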

GP has a much harder defense. They have to prove that, despite all of these capabilities, LLMs are not intelligent: that the mechanisms by which humans possess intelligence are so fundamentally distinct from a computer’s ability to exhibit the same behaviors that it invalidates any claim that LLMs exhibit intelligence.

Intelligence: “the ability to acquire and apply knowledge and skills”. It is difficult to argue that modern LLMs cannot do this. At best we can quibble about the meaning of individual words like “acquire”, “apply”, “knowledge”, and “skills”. That’s a significant goal post shift from even a year ago.


Thanks for the links, but yeah - I would not give much credit to tweets and blog posts. Often this "emergent" behavior is not actually emergent, and the people writing those threads are not experts.



