You might be conflating the epistemological point with Turing's test and the like. I could not agree more that indistinguishability is a key metric. These days it is still quite possible (at least for me) to distinguish LLM outputs from those of a thinking human, but that could change in the future. Whether LLMs "think" is not an interesting question, because these are algorithms, people. Algorithms do not think.