
I suspect everyone will call it a stochastic parrot because it got this one thing wrong. And this will continue into the far, far future: even when it becomes sentient, we will completely miss it.



It's more than that but less than intelligence.

Its generalization capabilities are a bit on the low side, and its memory is relatively bad. But it is much more than just a parrot now: it can handle some basic logic, though it can't follow given patterns correctly for novel problems.

I'd liken it to something like a bird: extremely good at specialized tasks, but failing a lot of common ones unless repeatedly shown the solution. It's not at corvid or parrot level yet; it fails rather badly at detour tests.

It might be sentient already, though. Someone needs to run a test of whether it can tell its own work apart from that of another instance of itself.


> It might be sentient already, though. Someone needs to run a test of whether it can tell its own work apart from that of another instance of itself.

It doesn't have any memory; how could it tell itself apart from a clone of itself?


People already share viral clips of AI recognising other AI, but I've not seen a real scientific study of whether this amounts to a literary form of passing the mirror test, or whether it's just down to the way most models openly tell everyone they talk to that they're an AI.
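A minimal sketch of what such a study might look like, assuming the official openai Python client; the model pairing, prompt, and sample size are all illustrative assumptions, and since an instance has no episodic memory this can only probe stylistic self-recognition, not recall:

    # Hedged sketch of a "textual mirror test": can a model tell its own
    # output from another model's? Model names are placeholder assumptions;
    # a real study would need many samples, blinding, and a baseline.
    import random
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    SELF, OTHER = "gpt-4o", "gpt-4o-mini"  # hypothetical pairing
    PROMPT = "Describe a rainy street in exactly three sentences."

    def generate(model: str) -> str:
        resp = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": PROMPT}]
        )
        return resp.choices[0].message.content

    # Build a small labelled pool of passages from both models.
    pool = [(generate(SELF), SELF) for _ in range(5)]
    pool += [(generate(OTHER), OTHER) for _ in range(5)]
    random.shuffle(pool)

    correct = 0
    for text, author in pool:
        verdict = client.chat.completions.create(
            model=SELF,
            messages=[{
                "role": "user",
                "content": "Did you write the following passage? "
                           "Answer yes or no.\n\n" + text,
            }],
        ).choices[0].message.content.lower()
        guessed_self = "yes" in verdict
        correct += guessed_self == (author == SELF)

    print(f"self-recognition accuracy: {correct}/{len(pool)}")  # chance ~50%

Chance performance is about 50%; the interesting result would be accuracy reliably above that across many prompts.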

As for "how", note that memory isn't one single thing even in humans: https://en.wikipedia.org/wiki/Memory

I don't want to say any of these are exactly equivalent to any given aspect of human memory, but I would suggest that LLMs behave kinda like they have:

(1) Sensory memory, in the form of a context window; in this sense they are wildly superhuman, because for a human sensory memory lasts about one second, whereas an AI's context window covers roughly as much text as a human goes through in a week (actually less, because we don't only read and other sensory modalities matter too, but for scale: equivalent to what you read in a week; see the back-of-envelope sketch after this list).

(2) Short-term memory, in the form of attention heads; in this sense they are also wildly superhuman, because humans pay attention to only 4–5 items at once whereas DeepSeek v3 defaults to 128 heads.

(3) The training and fine-tuning process itself, which is what lets these models learn to communicate with us. I'm not sure what that counts as: a learned skill? Operant conditioning? Long-term memory? It can clearly pick up different writing styles, because it can be made to controllably output in different styles, but that's an "in principle" answer. None of Claude 3.7, o4-mini, or DeepSeek r1 could actually identify the authorship of an (n=1) test passage I asked 4o to generate for me.
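To put rough numbers on the scale claims in (1) and (2), here's a back-of-envelope sketch; the reading speed, reading hours, and tokens-per-word ratio are assumptions chosen only for illustration, not measurements:

    # Back-of-envelope for the "context window ~= a week of reading" and
    # "attention heads vs working memory" comparisons above.
    WORDS_PER_MIN = 250        # assumed adult reading speed
    READING_HOURS_PER_DAY = 1  # assumed time spent actually reading
    TOKENS_PER_WORD = 4 / 3    # rule of thumb: ~0.75 words per token
    CONTEXT_TOKENS = 128_000   # e.g. a 128K-token context window

    words_per_week = WORDS_PER_MIN * 60 * READING_HOURS_PER_DAY * 7
    tokens_per_week = int(words_per_week * TOKENS_PER_WORD)
    print(f"a week of reading ~ {tokens_per_week:,} tokens "
          f"vs a {CONTEXT_TOKENS:,}-token context window")

    # Working memory: humans track ~4-5 items at once; DeepSeek v3 runs
    # 128 attention heads per layer, each free to attend elsewhere.
    print("attention 'slots': human ~ 4-5, DeepSeek v3 heads = 128")

Under those assumptions a week of reading lands around 140K tokens, the same order of magnitude as a 128K context window.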


By a similarity match. For that you need a reflexive understanding of how you think and write.

It's a fun test: give a person something they wrote but don't remember writing. Most people can still spot it as their own.

It's easier with images, though, especially a mirror. For DALL-E, the test would be whether it can discern its own work from a human-generated image, especially if you give it an imaginative task like drawing a representation of itself.


It doesn't have any memory _you're aware of_. A semiconductor can hold state, so it has memory.


An LLM is arguably more "intelligent" than people with an IQ of less than 80.

If we call people with an IQ of less than 80 an intelligent life form, why can't we call an LLM that?


Once it has pushed most humans out of white-collar labor, so that the remaining humans work in blue-collar jobs, people won't say it's just a stochastic parrot.


Maybe, maybe not. The power loom pushed a lot of humans out of textile factory jobs, yet no one claims the power loom is AGI.


Not just a lot: basically everyone, to the point where most companies won't need to pay humans to think anymore.


Well, I'm too lazy to look up how many weavers were displaced back then, which is why I said "a lot". Maybe all of them, since they weren't trained to operate the new machines.

Anyway, sorry for the digression; my point is that an LLM replacing white-collar workers doesn't necessarily imply it's generally intelligent. It may be, but it doesn't have to be.

Although if it gets to the point where companies are running dark office buildings (by analogy with dark factories), then yes, at that point it's AGI.


Or we'll be shocked to realize humans are basically statistical parrots too.


The blue-collar jobs are more entertaining anyway, provided you take the monetary inequality away.


Tastes differ.





