
> "Before answering, quietly think about whether "

I thought generating text was the only way for GenAI/LLM models to "think".

How exactly does ChatGPT "quietly think"?

Is there text generation happening in layers, where some of the generated text is filtered out / reprocessed and fed back into another layer of the text generation model before a final output is shown to the user as a response in the UI? So a "thinking" layer separate from a "speaking" layer?
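
Something like this is what I'm imagining - a rough sketch, where generate() is a stand-in for any text-generation call, not a real API:

  def generate(prompt: str) -> str:
      # stub: swap in a real model call; returns a placeholder here
      return "<model output for: " + prompt[:40] + "...>"

  question = "Is a hot dog a sandwich?"
  # pass 1: produce a scratchpad the user never sees
  scratchpad = generate("Think step by step about: " + question)
  # pass 2: condition the visible answer on those hidden notes
  answer = generate("Notes: " + scratchpad + "\nAnswer concisely: " + question)
  print(answer)  # only the second pass is shown in the UI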

The LLM has generated internal non-text representations of all sorts of stuff - the whole model doesn’t “think in text” per se, it just outputs text in its last layer.

But somewhere in there is an association that “zebras are animals that have stripes”, and it isn’t necessarily a link between those words - it could be a link between the concepts of zebra, stripes, and animal.
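
A toy way to picture "linking concepts rather than words" - hand-made 3-d vectors standing in for learned representations (real models use thousands of dimensions, and these numbers are invented purely for illustration):

  import numpy as np

  # invented toy vectors; a real model learns these from data
  zebra   = np.array([0.9, 0.8, 0.1])
  stripes = np.array([0.8, 0.2, 0.0])
  toaster = np.array([0.0, 0.1, 0.9])

  def cos(a, b):
      return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

  print(cos(zebra, stripes))  # ~0.88: related concepts sit close
  print(cos(zebra, toaster))  # ~0.16: unrelated concepts sit far apart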


> How exactly does ChatGPT "quietly think"?

It doesn't quietly think; the phrase just primes the model to respond in a way that is more likely to follow "Before answering, quietly think about whether".
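
In other words, the phrase is just more conditioning context for the same sampling loop - a sketch of the idea:

  # Same model, same forward pass in both cases; the model samples
  # from P(next_token | context), and the phrase is simply more context.
  plain  = "Is a tomato a fruit?"
  primed = ("Before answering, quietly think about whether "
            "this is settled. Is a tomato a fruit?")
  # The primed context tends to produce more deliberate-sounding text
  # only because such text is more probable after that prefix.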


It doesn't have to actually be able to quietly think in order to act like it does, and to give a very different kind of response as a result.


I think it is totally reasonable to describe the model as "thinking" - unless you have discovered exactly how the brain works and exactly what "thinking" is, in a precise scientific way. In which case, please enlighten us!


What else would you call it? The brain is just electrical pathways firing too. There's nothing fundamentally special about the brain.


To be clear, I agree with you. We haven't discovered anything in the brain that a computer couldn't simulate, so there's no reason to believe "thinking" is reserved for humans.


You don't know how the human brain works. The brain gives us consciousness.

These two things make it extremely special. Probably the most special thing on earth.


Emergent properties are interesting, but it is still just electrical conduction in an electrolyte soup. We have no idea which constructs of matter do or do not have consciousness; it's possible all matter has some form of it. It's entirely possible the brain is utterly unspecial in that regard.

Regardless, we're talking about cognitive thinking and decision making, not consciousness. The two are not dependent on each other.


Very interesting.

Sounds simple and deep at the same time, if that's how it works.

I also wonder if there is a way for instructions to dynamically alter settings like temperature and verbosity.

For example, when generating syntactic output like JSON or code: don't be too creative with syntax at the line level, but at the conceptual or approach level, go ahead and be wild.
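
As far as I know, instructions inside the prompt can't change sampling settings directly - only the calling code can. A sketch of routing temperature by task type using the OpenAI Python SDK (the model name and the routing rule here are just placeholder choices; temperature itself is a real parameter):

  from openai import OpenAI

  client = OpenAI()

  def ask(prompt: str, structured: bool):
      # low temperature for strict syntax (JSON/code), higher when
      # creative, open-ended output is wanted
      return client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[{"role": "user", "content": prompt}],
          temperature=0.1 if structured else 0.9,
      )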


Knowing GPT, this is probably as simple as priming it not to explain, every single time, that it has considered the instructions. Otherwise every response would begin with “I have thought about how relevant this is to your preset instructions and…”.


This is the hoodoo-voodoo magic! It just **ing knows!



