your post would be more convincing if it didn't entirely rely on the output of an AI model, something well known for being unable to distinguish correctness from sounding correct, and for hallucinating fake information and even fake sources
thus, the logical end of the sentence "So if the AI thinks eminent philosophers have consensus on DFW's lobster points..." is "then we can make no conclusions solely based on that AI output, because of the aforementioned reasons"
I find myself frequently at an inflection point between the tyranny of time and the need for correctness in communication. The role technology plays, and the toll it takes in return, is particularly troubling: humans have been overwhelmed by a broad and deep flood of information arriving on ever-shorter time-scales since the publication of Infinite Jest nearly three decades ago, which is roughly where the time scales in Peter Pirolli's research start. See the slides linked below.
That's the phenomenological n-of-1 take. For a more empirical treatment, see Peter Pirolli's slide deck here:
The self-reflective point that you make is good, but it hides the fact that what we are missing is a set of meta-tools around emergent AI that allow us to build scalable human-computer sensemaking teams - the kind Pirolli implies.
At this point, even the ability to run those meaning-making, sense-making, validation, and verification feedback loops in overnight batch mode would be an improvement over where we are, since our accusations of scientism vs. anthropo-robotic models of mental disorders could be rendered moot by agents that could actually resolve complex models of evolutionary epistemology.
I feel like we all want the same thing: Truth. However, some of us are more tolerant of what we have now: prototypes, proxies, and n-of-1, back-of-the-envelope verification, validation, and fact-checking, if you will.
Human beings also cannot distinguish between fantasy and reality (on average); it is just that we're all used to our own fantasies and can work around them. In this case, the poster did not understand why something was the case and indicated they are out of their depth: the AI response is meant to serve as a bridge to more useful discussion. It provides a few reference points of view, giving the OP something concrete to respond to, rather than having to articulate literally every aspect of why they feel something is the case.
It's not meant as an exhaustive source that is correct about everything - we're talking about philosophy; such a source doesn't exist in the first place. Therefore, decrying something on the grounds that it is possibly incorrect is the worst kind of response: if that is the bar, nobody may respond.
> Human beings can also not distinguish between fantasy and reality (on average)
this statement is prima facie untrue, and quite ridiculous-sounding
it is certainly the case that humans sometimes can't tell without checking, which is why it's important to verify the output is correct rather than relying on our own sometimes-faulty wrongness-detection capabilities
the AI output could have been completely made up, which is why it can't be relied upon for the kind of factual claims you relied on it for
> this statement is prima facie untrue, and quite ridiculous sounding
If it sounds ridiculous to you, I invite you to speak to some actual human beings. They believe so many things that are patently ridiculous (and that frequently contradict each other) that it's like stepping into an ocean of lies. I'm honestly surprised that this specific part of my statement is that weird to you.
> it's important to verify the output is correct rather than relying on our own sometimes faulty wrongness detection capabilities
This is a step you must take with both AI and people. Given that this is the case, I do not see a difference in value in terms of the reliability of facts. If anything, you could make a case that human imagination is more valuable (because the AI is just a mathematical reproduction of whatever imagination went into it). That said, I can't think of why it would be the case outside of a human superiority angle, which is tedious.
> the AI output could have been completely made up
Hence why I called it a springboard for further discussion and not an encyclopedia of absolute fact.
> If it sounds ridiculous to you, I invite you to speak to some actual human beings
sure, I will speak to you
below are 10 propositions, each either fantasy or reality. I invite you to try to determine which is which; then we can find out whether the comment about the average is true: