> LLMs can pass novel theory of mind tests, which is what we're talking about.

Passing a ToM test is not what OP meant by having an "underlying theory of mind." OP's talking about the machine having an underlying mind (i.e. sentience, sapience, consciousness, etc.); ToM tests only evaluate output.

> You said "Those tasks could be completed a [sic] traditional static program.", and no, they can't. You're incorrect.

They can: a static program as I described would indeed answer that one question correctly, yielding a positive ToM score without having seen any training data whatsoever. Did the programmer see it? Maybe, but the machine didn't, and it would pass the test regardless.
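
For concreteness, a minimal sketch of the sort of static program being described (the Sally-Anne-style question and the canned answer here are just illustrative placeholders):

    # Hypothetical static program: one ToM question and its answer are
    # hard-coded by the programmer; nothing here is learned from data.
    CANNED_ANSWERS = {
        "Where will Sally look for her marble?": "In the basket",
    }

    def answer(question: str) -> str:
        # No model, no inference: just a lookup the programmer wrote in.
        return CANNED_ANSWERS.get(question, "I don't know")

    if __name__ == "__main__":
        print(answer("Where will Sally look for her marble?"))  # In the basket

The machine itself has seen no training data; whether answering this way counts as passing a "novel" test is exactly what's disputed below.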




If you have put the answer into the program, then by definition you had the test available to you when you finalised the program, which means it is definitionally not a novel test.
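
In code terms, the argument is roughly this (a sketch with hypothetical names; "available at build time" covers training data, prompts, and hard-coded answers alike):

    # "Novel" in the ML-testing sense: a test item counts as novel only if
    # it was not available when the system (program or model) was built.
    def is_novel(test_item: str, items_available_at_build_time: set[str]) -> bool:
        return test_item not in items_available_at_build_time

    # Hard-coding the answer puts the item into items_available_at_build_time,
    # so by this definition the resulting test run is not novel.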


The test is novel to the program, just not to its programmer. So are we testing the program, or are we actually testing its programmer? If we're testing the program, then the programmer's foreknowledge is irrelevant.


> The test is novel to the program

That's funny, I thought you said the test's answer was embedded into the program, making it definitionally not novel to the program.

Anyway, this is boring. You've had five or more opportunities to understand what the word "novel" means in an ML testing context and are choosing wilful obtuseness instead.


> in an ML testing context

OP was not speaking in the ML testing context, hence the misunderstanding.



