
I disagree. If you could take a snapshot of someone's knowledge and memory and restore them to it over and over, that person would give the same answer to the same question every time. The same is not true of an LLM.

Heck, I can't even get LLMs to be consistent about *their own capabilities*.

Bias disclaimer: I work at Google, but not on Gemini. If I ask Gemini to produce an SVG file, it will sometimes do so and sometimes say "sorry, I can't, I can only produce raster images". I cannot deterministically produce either behavior - it truly seems to vary randomly.




You'd need to restore more than memory/knowledge. You'd need to restore the full human, and in the exact same condition (inside and out).

Ask me some question before bed and again after waking up: I'll probably answer it at night, but in the morning I'll tell you to sod off until I've had coffee.


Of course it varies randomly; that's literally what temperature is for in generation.
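
For the curious, here's a minimal sketch of what temperature does at sampling time (illustrative Python with numpy, not any particular model's actual implementation):

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        # Temperature rescales the logits before the softmax:
        # low temperature sharpens the distribution toward the
        # argmax; high temperature flattens it, adding variety.
        rng = rng or np.random.default_rng()
        if temperature == 0.0:
            return int(np.argmax(logits))  # greedy: fully deterministic
        scaled = np.asarray(logits, dtype=np.float64) / temperature
        scaled -= scaled.max()  # subtract max for numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return int(rng.choice(len(probs), p=probs))

With temperature > 0 the same prompt can legitimately yield a different token on every run; only temperature 0 (or a fixed RNG seed) pins the output down.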


You could run an LLM deterministically too.

We often explicitly add randomness to the results, so it feels weird to then accuse them of not being intelligent after we've deliberately forced them off the path.
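
One way to make generation repeatable, sketched with the Hugging Face transformers library (the "gpt2" checkpoint is just a stand-in; bit-for-bit reproducibility also depends on hardware and floating-point kernels):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in model
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tok("The capital of France is", return_tensors="pt")
    # do_sample=False -> greedy decoding: no temperature, no sampling,
    # so repeated runs on the same setup produce the same text.
    out = model.generate(**inputs, do_sample=False, max_new_tokens=10)
    print(tok.decode(out[0], skip_special_tokens=True))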



