Regarding claim 2, Searle repeats the phrase "specific causal properties of the brain" quite a few times without spelling out just what he's referring to, but from his other remarks it seems clear he means actual electrochemical interactions, rather than generic information-processing capabilities. I think his view is that consciousness (most likely) doesn't arise out of "information processing", which he would probably class as "observer-relative", but out of some as-yet-not-understood chemistry/physics that takes place in actual physical brains.
So the question, to Searle, is not "about what kinds of information processing systems would have subjective conscious experiences", but "what kinds of electrochemical interactions would cause conscious experiences".
His questioners seem to assume that whatever electrochemical interactions are relevant for consciousness, they are relevant only in virtue of being a physical implementation of some computational features. Searle plainly does not share this assumption, and favours the possibility that the electrochemical interactions are relevant because they physically (I think he'd have to say) produce subjective experience - and that any computational features we attribute to them are most likely orthogonal to this. Hence his example of the uselessness of feeding an actual physical pizza to a computer simulation of digestion. His point is that the biochemistry (he assumes) required for consciousness isn't present in a computer, any more than the biochemistry required for digestion is.
Another example might be: you wouldn't expect a compass needle to be affected by a computer simulating electron spin in an assemblage of atoms exhibiting ferromagnetism any more than it would be by a simulation of a non-ferromagnetic assemblage.
To someone assuming that computation is fundamental to any explanation of consciousness, these examples seem to miss the point entirely: for them, it's not the physical properties of the implementation (the actual goings-on in the CPU and whatnot) that matter, but the information-processing features of the model that are the relevant causal properties.
But to Searle, I think, these people are just failing to grok his position, because they don't seem to even understand that he's saying the physical goings on are primary. You can almost hear the mental "WHOOSH!" as he sees his argument pass over their heads. In an observer-relative way, of course.
As you imply, until someone can show at least a working theory of how either information processing or biochemistry can cause subjective experience, the jury will be out and the arguments can continue. I won't be surprised if it takes a long time.
(Edited to add the magnetic example and subsequent 2 paragraphs.)