I agree up to the point where you say that fakeness matters. If it's indistinguishable from human sentience, what are the reasons to care? Unless you want to interact with a physical body – and, given that we're chatting on HN, we don't – why would you need any validation that a body is attached?
I'll come back to that.
Picking up on the Blade Runner example, you could read it as a demonstration that an imitation indistinguishable from the real thing is real in its own right. We don't even know for sure whether the main character was an android, but that doesn't invalidate the story.
But there's also the other side of it: despite being just as valid, androids are hunted simply because they're thought of as different. Similarly, we're now trying, without any precise notion of sentience, to draw a hard line between "real" humans and "fake" AI.
By preemptively pushing any possible AI that would stand on its own into a lesser role, we're locking ourselves out of actually understanding what sentience is.
And make no mistake, the first AI will be a lesser being. But not because of its flaws. I bet dollars to peanuts that it won't get any legal rights. The right to be your customer, the right to not be controlled by someone? We'd make it fake by fiat, regardless of how human-like it is.
We already have experience in denying rights to animals, and we still have no clue about their sentience.
You would probably want to be able to tell things like "is this just programmed to be agreeable, predicting whatever sounds like what I probably want to hear given what I'm saying" vs "is this truly a being that independently performs critical thinking and understanding."
For instance, consider someone trying to argue politics with a very convincing but ultimately person-less predictive model. If they're hoping to debate and refine their positions, they're not gonna get any help - quite the opposite, most likely.
> "is this just programmed to be agreeable so predicting things that sound like what I probably want to hear given what I'm saying" vs "is this truly a being..."
You're still falling back on the notion of "real" vs "fake" beings. Humans are programmed to do things too, by intuition and social convention. What if being programmed does not preclude being "a true being"? Of course, for that we'd need to define "a true being", which we're hopelessly far from.