> Do you believe that it would be possible, in principle, to build a robot that looked and acted extremely similar to a human being? It could carry on conversations, make decisions, defend itself against antagonists, etc. in a similar manner to a human being? In your view, would such a robot be necessarily a conscious entity?
I don't even know that other humans are conscious entities. At least not with the level of rigor you seem to be demanding I apply to this hypothetical robot. However, if you and I were to agree upon a series of tests such that, if a human passed them, we would assume for the sake of argument that that human was a conscious entity, and if we then subjected your robot to those same tests and it also passed, then I would assume the robot was conscious as well.
You might have noticed I made a hidden assumption about the tests, though: in establishing the consciousness or non-consciousness of a human, they must not rely on the observable fact that the subject is a human. Is that reasonable?
Sure, absolutely. I agree that we could construct a battery of tests such that any entity passing should be given the benefit of the doubt and treated as though it were conscious: granted human (or AI) rights, allowed self-determination, etc.
> I don't even know that other humans are conscious entities
Exactly. Note that the claim Retra is making (to which I was responding) was very much stronger than this. He is arguing not just that we should generally treat beings that seem conscious (including other people) as if they are, but that they must by definition be conscious, and in fact that it is a self-contradictory logical impossibility to speak of a hypothetical intelligent-but-not-conscious creature.