Conceptual reasoning machines rely on concrete, explicit, and intelligible concepts and rules. People like this because it 'looks' like reasoning from the inside.
However, our brains, like language models, rely on implicit, distributed representations of concepts and rules.
So the intelligible representations demanded of conceptual reasoning machines may be too strong a requirement for 'reasoning', unless you want to exclude humans too.
It's also possible that you don't have information on our technology, which models conceptual awareness of matter and change through space-time in a way that differs from any previous attempt.
That's called talking your book. Please spell out how a document indexing and retrieval system is more akin to a "conceptual reasoning machine" than o3 is.
If LLMs can't reason, then neither can this, whatever 'this' is supposed to be. That's not a good argument. Also, since you're apparently working on that product: 'It is difficult to get a man to understand something when his salary depends on his not understanding it.'