>I thought we were talking about behaviors that impact survival and are acted on by natural selection, not minute differences in MRI scans.
I was talking about the stupidity of p-zombies. Either way, those 'minute' differences in MRI scans build up in such a way as to determine the survival of the mind being scanned.
>Do you believe [...] such a robot be necessarily a conscious entity?
Yes, it would. Because in order to cause such behavior to be physically manifest, you must actually construct a machine of sufficient complexity to mimic the behavior of a human brain exactly. It must consume and process information in the same manner. And that's what consciousness is: the ability to process information in a particular manner.
Even a "sleepwalking zombie" must undergo the same processing. That processing is the only thing necessary for consciousness, and it doesn't matter what hardware you run it on. As in Searle's problem: even if you run your intelligence on a massive lookup table, it is still intelligence. Because you've defined the behavior to exactly match a target, without imposing realistic constraints on the machinery.
> Yes, it would. [...] that's what consciousness is: the ability to process information in a particular manner.
Then this is our fundamental disagreement. You believe consciousness is purely a question of information processing, and you're throwing your lot in with Skinner and the behaviorists.
I believe that you're neglecting "the experience of what it's like to be a human being"[0] (or maybe you yourself are a p-zombie ;) and you don't feel that it's like anything to be you). There are many scientists who agree with you and think that consciousness is an illusion or a red herring because we haven't been able to define it or figure out how to measure it, but that's different from sidestepping the question entirely by defining consciousness down until it's something we can measure (e.g. information processing). I posted this elsewhere, but I highly recommend reading Chalmers' essay "Facing Up to the Problem of Consciousness"[1] if you want to understand why many people consider this one of the most difficult and fundamental questions for humanity to attempt to answer.
>You believe consciousness is purely a question of information processing, and you're throwing your lot in with Skinner and the behaviorists.
No, that is not at all what is happening. That's not even on the same level of discourse.
>I believe that you're neglecting "the experience of what it's like to be a human being"
That experience is the information processing. They are the same thing, just different words, the way "the thing you turn to open a door" and "doorknob" name the same object. I'm not neglecting the experience of being human by talking about information processing. What is human is encoded by information that you experience by processing it.
>There are many scientists who agree with you, and think that consciousness is an illusion or a red herring because we haven't been able to define it or figure out how to measure it [...]
No, this is not agreement with me. This is not at all what I'm saying.
In that case, I'm really struggling to understand your position.
> What is human is encoded by information that you experience by processing it.
So you're saying that it's impossible to process information without experiencing it? That the act of processing and the act of experiencing are one and the same? Do you think that computers are conscious? What about a single neuron that integrates and responds to neural signals? What about a person taking Ambien who walks, talks, and responds to questions in their sleep (literally while "unconscious")?
>So you're saying that it's impossible to process information without experiencing it? That the act of processing and the act of experiencing are one and the same?
Yes, exactly.
>Do you think that computers are conscious? What about a single neuron that integrates and responds to neural signals?
This is a different question. No, computers aren't conscious. You need to have the 'right kind' of information processing for consciousness, and it's not clear what kind of processing that is.
This is essentially the Sorites Paradox: how many grains of sand are required for a collection to be called a heap? How much information has to be processed? How many neurons are needed? What are the essential features of information processing that must be present before you have consciousness?
These are the interesting questions. So far, we know that there must be continual self-feedback (self-awareness), enough abstract flexibility to recover from arbitrary information errors (identity persistence), a process of modeling counterfactuals and evaluating them (morality), a mechanism for adjusting to new information (learning), a mechanism for combining old information in new ways (creativity), and other kinds of heuristics like emotion, goal-creation, social awareness, and flexible models of communication.
You don't need all of this, of course. You can have it in part or in full, to varying levels of impact. "Consciousness" is not well-defined in this way; it is a spectrum of related information-processing capabilities. So maybe you could consider computers to be conscious. They are "conscious in a very loose approximation."
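If it helps, here's a toy sketch of what treating consciousness as a spectrum rather than a boolean could look like (the capability names mirror the list above; the scoring is entirely hypothetical, not a real measure): you ask which capabilities a system exhibits and get back a degree instead of a yes/no.

    # Toy illustration: "consciousness" as a graded profile of
    # information-processing capabilities rather than a boolean.
    CAPABILITIES = [
        "self_feedback",    # continual self-feedback (self-awareness)
        "error_recovery",   # recovering from arbitrary information errors
        "counterfactuals",  # modeling and evaluating alternatives (morality)
        "learning",         # adjusting to new information
        "creativity",       # recombining old information in new ways
    ]

    def profile(system_capabilities):
        """Return the fraction of listed capabilities a system exhibits."""
        present = sorted(set(system_capabilities) & set(CAPABILITIES))
        return len(present) / len(CAPABILITIES), present

    # A plain computer scores low but nonzero: "conscious in a very
    # loose approximation."
    score, present = profile({"learning", "error_recovery"})
    print(f"{score:.0%}: {present}")  # 40%: ['error_recovery', 'learning']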