You might be conflating the epistemological point with Turing's test, et cetera. I could not agree more that indistinguishability is a key metric. These days, it is quite possible (at least for me) to distinguish LLM outputs from those of a thinking human, but in the future that could change. Whether LLMs "think" is not an interesting question because these are algorithms, people. Algorithms do not think.
Yes, but the OP remarked that the question "was settled long ago." The quote presented doesn't settle the question, though; it simply dismisses it as not worth considering. For those who do believe it is worth considering, the question is arguably still open.
It's relevant if the claim is stronger than "the submarine moves through water." If instead one were to say the submarine mimics human swimming, that would be false, and that is the kind of claim we often see regarding AGI.
In that regard, it's a bit of a false analogy, because submarines were never meant to mimic human swimming, whereas AI development often has exactly that motivation. We could just say we're developing powerful intelligence amplification tools for use by humans, but for whatever reason, everyone prefers the sci-fi version. Augmented Intelligence is the forgotten meaning of AI.
Submarines never replaced human swimming (we're not whales); they enabled human movement underwater in a way that wasn't possible before.
I do not view it as dismissive at all; rather, it accurately characterizes the question as silly. "Swim" is a verb applicable to humans, as is "think." Whether submarines can swim is a silly question, and the same goes for whether machines can think.