The LLM has generated internal non-text representations of all sorts of stuff - the model as a whole doesn't "think in text," per se; it just outputs text in its last layer.

But somewhere in there is an association that "zebras are animals that have stripes," and it isn't necessarily a link between those words - it could be a link between the concepts of zebras, stripes, and animals.
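
A rough way to see the "only the last layer outputs text" point (a minimal sketch, assuming the Hugging Face transformers library and GPT-2; the prompt is just an illustration):

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    inputs = tokenizer("Zebras are animals that have", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs, output_hidden_states=True)

    # Intermediate layers work in vectors, not tokens: for GPT-2, 13 stacks
    # of 768-dimensional hidden states, one vector per input position.
    for i, h in enumerate(outputs.hidden_states):
        print(f"layer {i}: {tuple(h.shape)}")

    # Only the final projection maps the last hidden state onto the
    # ~50k-token vocabulary, turning internal vectors back into text.
    next_id = outputs.logits[0, -1].argmax().item()
    print("next token:", tokenizer.decode(next_id))

Everything before that final projection is just geometry in a high-dimensional space, which is where a concept-level "zebra/stripes/animal" association would live rather than in any string of words.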



