A materialist rejects "consciousness" as a fruitful term, observing that it often leads to confusion and non-sequiturs, as contradictions often do. The paper itself shows how certain definitions of consciousness would have these outcomes. Materialists would say that you need a functional definition of consciousness: If it walks like a duck, talks like a duck, it's a duck. It's a bit like talking about a "soul". Humanity, for the longest time, thought there was a soul, separate from our mere bodies. They just had a "feeling" that it was real, despite not being able to provide a functional definition of what it is. Consciousness is no different. Coherence of the self is an illusion.
Well, that's precisely the point of the article. According to the author, the United States indeed "walks like a duck and talks like a duck," to the extent that it is homeostatic (feeds itself, defends itself, undertakes coordinated actions such as trade embargoes) and large scale information transfer does occur between its constituent subsystems (humans, say).
Of course there is something intuitively bothersome about that: somehow, we don't believe that there is something it is like to be the United States, but we do believe there is something it is like to be xenophon. That's why the core question isn't absurd at all -- the author is trying to understand what truly differentiates a conscious system.
It doesn't seem like that much of a bullet to bite.
If you accept that many-celled organisms can be conscious, and that eusocial colonies (ants, bees) behave more like intelligent organisms than any one member, it's not much of a stretch to say "hm, a country belongs on that scale".
It's moderately surprising that a consistent system would put the US at, e.g., 25% of a human by consciousness, but it's not the kind of thing that makes you say "oops, I made a wrong choice somewhere".
The lens of homeostasis (or autopoiesis more generally) is useful: it does allow you to put things on a meaningful, insight-yielding scale: there are humans, and rocks, and then fire and hurricanes somewhere in between. So what does it take to push something past humans on that scale?
It's a huge bullet to bite if you think about the ethical implications downstream. Individualism assumes that the relationship between a nation state and the individuals that make it up is exactly not like this.
FWIW, any theory that tries to say that larger scale relationships exactly match smaller scale ones is surely wrong, but so is any theory that tries to deny holistic aspects at larger scales.
Just like organs do not work like large cells and people aren't large-scale organs, societies aren't large scale people. But there are social elements that bind us into a society. Dogmatic individualism is wrong, and treating society as an individual is also wrong.
Can I just say "thank you" for the reminder that things which look similar at different spatial scales can nonetheless be completely different. That dust storm from space may look like the beach at my feet, but they are two very different systems that move differently and arise for different reasons. It's cool that they look the same, and no doubt that's meaningful for some reason, but it doesn't mean they are the same.
In any event the US acts like a person because it is basically the President plus Congress plus the leaders of the top 100 public and private American institutions. Any group that small is going to act coherently.
I'm not ready to accept the simplification of a few leaders and powerful people having quite that level of significance. I think the U.S. is still a much more massive complex system.
Your assertion is sorta like saying that the human body is like an organ because it is basically the heart, brain, and eyes.
We obviously do not live in anarchy, but I don't think our world can be so simplified accurately either.
The article isn't saying that the larger scale relationships exactly match the smaller scale ones. It's just making the claim that the larger scale relationships include all the ingredients that materialists usually identify as sufficient for consciousness.
Individualism (like reductionism more generally) only requires that the higher level explanations be expressible in terms of lower level explanations, not that they're necessarily better (in terms of the insight per unit complexity) for any particular context[1]. It certainly doesn't require any analogy between the people in a nation and the neurons in a brain.
[1] like how an atom-by-atom model of an airplane might be more accurate, even though breaking it into a smaller number of tension- and shear-bearing elements would be accurate enough
I'm not sure why you're being downvoted, as nobody has replied to explain their reasons for doing so, but the idea seems reasonable to me in this context.
I could see taking it a step further: instead of a Gaia theory, by which I assume you mean that the Earth is a sentient system, why not have all things that are causally connected be part of a universal consciousness? At what distance apart in space or time or both do we reject that individual components of a complex system could possibly combine to be conscious? Also, do all components need to be of the same type (individual human beings, in the United States example)? Can't other mammals and birds and fish and insects and forests and fields contribute?
The example in the article is a nation state, but is there any reason for rejecting a larger system? If the United States is conscious, does an American who goes to live abroad cease to be a bit of United-States-consciousness and become, for example, a bit of France-consciousness? What of astronauts, or those who will inevitably live on Mars someday?
Is the number of bits of nation-state-consciousness significant? Would China be "more conscious" by virtue of having four times the population? Is Canada only dimly conscious, having only a ninth of the population?
Finally, it seems strange to me that we'd use geographical borders to delineate separate nation-state-consciousnesses in an age when individuals so routinely communicate and travel outside of them. Nation states don't exist in vacuums, but interact and cooperate in many ways that affect each other mutually, for both good and bad. Perhaps all the people who speak a given language could be a conscious entity, regardless of where they live.
This is all fun to think about, but I lean towards the so-called neurochauvinism mentioned in the article and have a difficult time wrapping my head around the concept that something outside of brains could be conscious.
As I said below, if you really are a materialist, then this shouldn't bother you. As a materialist, you can't judge something from the inside, you can only apply the duck rule.[0] And the US, a book club, a couple, all of these deliberate, form consensus, act for self-survival...they act like living things just like us bags of meat do, so they should be "conscious", just like a person or a rabbit or a duck is.
Or, if this still does bother you, maybe you should question your faith in materialism.
But while materialists may reject consciousness as a fruitful term, there's a problem with finding a functional definition of consciousness -- or even one that agrees enough with what we colloquially understand consciousness to be -- as consciousness, as we understand it, plays absolutely no observable function. Dismissing offhand as an illusion something that is not just perceived to be real but perceived to be the realest thing there is, is just waving away a really hard problem. I can accept that consciousness may be an illusion, but that doesn't make it any less elusive, as illusion itself pre-supposes consciousness.
And suppose that you come up with a theory of when consciousness -- or its illusion -- emerges. How can you possibly test that?
There is a huge difference between, "we can't talk about it in a scientifically meaningful way" and "there is no it to talk about". The former most certainly does not imply the latter.
Coherence of self is demonstrably an illusion, because we have seen experiments on split-brain patients (people with a severed corpus callosum).
In that situation the brain halves still insist on there being a single self, and insist on knowing the motivations for actions taken by the other brain half, despite not having any direct connection, and often being provably wrong (e.g. researchers have manipulated data, presented it to one brain half as decided by the other, and gotten the brain half to explain its motivations for making choices that were actually made by the researchers).
Similarly, you can sever the connection between the brain and the gut (which also contains a mass of neurons) and the gut will continue to operate independently, yet the gut can affect your emotional state and other parts of what we tend to consider as "self".
We are governed by a range of independently operating systems that combine to create "self" in some form.
Note that this does not mean that consciousness has to be an illusion, but the illusion of a single, coherent, unified self is created and actively perpetuated by our brains, papering over all kinds of holes.
Oh, I am not arguing with that. I'm just saying that the fact consciousness itself cannot be defined scientifically at this point doesn't mean you can just wave it away as "an illusion" without further explanation.
>Coherence of self is demonstrably an illusion, because we have seen experiments on split-brain patients (people with a severed corpus callosum).
The most that shows is that coherence of self is sometimes an illusion. You could modify the source code of a distributed database so that the nodes in a cluster were no longer guaranteed to be consistent. That wouldn't show that consistency is an illusion; it would just show that you can break stuff by modifying it. It's hardly surprising that people with damaged brains behave in odd ways.
The point is not just that the brain halves make inferences, but that they explicitly hide that they are doing so, and seemingly (though proving this is hard) believe they are acting from knowledge rather than from inferences they have no basis for.
Whether or not they're right in any given claim is beside the point, as at any point past the split that includes an element of chance and depends on the extent of outside interference. E.g. just talking to a split-brain patient from one side in a sufficiently lowered voice is enough to provide different information to each brain half and cause them to diverge - coherence is lost pretty much from the first moment, though the extent of divergence can remain quite low for some types of information for quite some time.
> You could modify the source code of a distributed database so that the nodes in a cluster were no longer guaranteed to be consistent. That wouldn't show that consistency is an illusion; it would just show that you can break stuff by modifying it.
This is a poor analogy. In this case the corpus callosum is the replication mechanism, not the computation.
Take a multi-master database and sever the replication, but let the nodes see mostly the same data most of the time. Now change the database so that it tries to infer what the results actually should be based on patterns seen in the past, and so that it actively lies to you about the basis for its query responses (the database tells you a result is derived from inserted data, but half the time it's computed based on imperfect assumptions about what the other node will have seen).
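To make the analogy concrete, here's a toy sketch (my own illustration, not anything from the article or the experiments) of two nodes whose replication link has been cut, but which keep reporting the same confident, fabricated basis for every answer:

```python
# Toy model of the "confabulating database" analogy. Two nodes with no
# replication link; each answers every query with full confidence, claiming
# its answer came from inserted data even when it's only guessing.

class Node:
    def __init__(self, name):
        self.name = name
        self.data = {}  # facts this node actually received via insert()

    def insert(self, key, value):
        self.data[key] = value

    def query(self, key):
        if key in self.data:
            return self.data[key], "derived from inserted data"
        # The replication link is gone, but instead of admitting a gap the
        # node confabulates: it guesses and reports the same confident basis.
        return "a plausible-sounding guess", "derived from inserted data"

left, right = Node("left"), Node("right")
left.insert("choice", "picked the red card")  # only the left node saw this

value, basis = right.query("choice")
print(value, "--", basis)  # the right node reports a basis it doesn't have
```

The point of the sketch is that nothing in the node's response distinguishes a real lookup from a confabulated one; you can only tell by knowing what was actually inserted where.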
This is what is actually observed in experiments on split-brain patients: you know you've fed bullshit data in, yet the system responds by providing a result with a confidence it has no justification for. You can argue we can't prove that the motivation is to maintain the illusion of self, but the effect certainly is to maintain an illusion of a coherent self - or at least try to. Unlike the database, you can ask each brain half what it based its decision on, and the deceived brain half will usually insist it made the decision based on x, y, z, even when researchers made the decision without its knowledge.
The only change is blocking communication.
Yet when one brain half tells you it made a decision and explains to you why, you can't trust a word it's saying, as it will present itself with equivalent certainty whether or not it actually made the decision, as long as it thinks some part of the collective actually did make the decision.
Think of any part of your brain as some government spokesperson who is trying to speak for the whole, and who needs to answer questions about statements they personally never made without letting slip that it's total chaos behind the scenes.
But my read is that this article is intending to throw a challenge in the direction of philosophers who advocate a materialist understanding of consciousness, but who aren't eliminative about the concept of consciousness. I.e. they believe it's meaningful to call humans "conscious", and they also believe that the phenomenon has a materialist explanation. This article aims to force them to either drop the concept, or accept that it can be applied very broadly.
Who said that there's no definition provided? As per my understanding of the Upanishads, which are the most profound works on consciousness:
Consciousness (atman/soul) is self-existent, ever-blissful, non-material energy. It has no beginning and no end. It is never born, so it never dies. It's the absolute truth because it's not subject to change, death and decay. Material reality is only a relative truth because it's always changing. All of our body's cells recycle every year, but we're still aware of the body as before. That awareness is the non-material atman, which associates itself with this body as long as it's bound.
Yogis for ages have been able to go to the source – atman – by silencing the mind and its modifications (vrittis) and have been able to observe the causal, subtle and gross planes of existence along with the wheel of time. Hence the Upanishad declares: "That which cannot be observed through the mind, rather through which the mind gets the ability to observe, is atman"
We materialists want a material definition for this thing you call non-material. What is it. Define it so we can analyze it with our material-limited instrument, empiricism. Otherwise we're forced to conclude that it does not "exist" (is material) and downvote you for being so stupid.
I've defined it already. As far as experience is concerned, there are multiple systematic paths (under different branches of Yoga) to experience this non-material thing, which is the self only, but unfortunately you can't do it with your very limited material senses. Do you realize the subtle level at which the brain operates? When your instruments can't fathom the subtleness of your brain, how can such instruments even get a glimpse of consciousness, which is subtler than the mind?
It hardly takes a year or two under proper guidance and full commitment to get glimpses of this subtlemost thing; do you dare to invest?
If you fanatically stick to a viewpoint without even evaluating the other view with an open mind, you risk repeating what religious fanaticism did to pre-modern scientists in the west.
And regarding downvotes, do you think I care? I could have posted a more popular comment in favor of materialism to gain more material points :p
Edit: To learn more, you can read autobiographies of two living yogis [0] and [1]. A succinct yet most authoritative practical guide is Yoga Sutras [2].
I was honestly trying to be facetious before. I'm sorry if that wasn't clear. Text sucks. I actually think it was really inappropriate that you were down-voted. The notion that you should have to produce material definitions for concepts presented as fundamentally non-material is tautologically absurd. It's basically a rude way of suggesting that you shouldn't present that view here. I think, absent some success by materialists, presenting non-material notions of an apparent thing is utterly acceptable. Not only that, yours was on topic and directly in response to a comment you apparently read and attempted to respond to earnestly.
Yes, obviously, in that the idea is embodied as neural connections in the brains of materialists. Any phenomenon that emerges from materialistic processes is also materialistic.
You can couple material with complexity such that everything can apparently be material. However, if we limit our notion of "material" to what we observe in our universe, then we can easily do the same with conceivable forms of non-material. We can even simulate them in our entirely material world using the universal correspondence of the Turing machine. The "everything is material" explanation is a degraded form of the "everything is complexity" explanation. I suspect most formulations of an intelligent entity or an "idea" would be non-materially formulated but materially represented in this same way. If you assume materialism is the explanation for everything, then yes, all known instances of the materialism idea are incidentally materialistic.
Marvin Minsky said something to the effect of "We stand on the shoulders of giants, but we quickly jump off when they are found to be wrong." Ancient wisdom is not something that should be respected in terms of understanding the universe if it is known to contradict it.
A materialist? Which one are you talking about? Marx? Descartes?
And saying that consciousness is a fruitless concept is a bit glib, when this very concept is unavoidable when trying to know what humanity is. It's so fruitless that hundreds of books have been written to try to grasp a part of it. It is also central to psychoanalysis, and to many major novels of the last century. Fruitless, you said?
To say this is quite ignorant of the past few (24?) centuries of philosophy, including those centuries and thinkers when philosophy was part of religious thinking.
Actions are very, very different from beliefs when we anthropomorphize an abstract notion of a race of people. Humanity is a logical 'or' of all its actions. For beliefs, the operation is much more complicated.
You don't speak for humanity. Believe in souls or not, most people do, and we need to understand that when we use abstract phrases like 'humanity believes in x'.
Simply naming a logical fallacy isn't an argument. And in fact I claimed it would be fair to suggest humanity believes in souls, I didn't make the claim. Rather, I suggested it is far more reasonable (though not a fact -- "humanity" is an abstract subjective concept with no fixed meaning) to assert humanity believes in souls than that humanity doesn't, a silly assertion the article makes without argument.
Arrgh! This is what annoys me about reading much philosophy. The article is about consciousness, which is a term used by many different people in many ways and with many different implied definitions. But does the guy give a clear definition of what he's talking about? No, he leaves it vague but waffles on for hundreds of words in a way that you can't say is right or wrong because you're not sure what he means.
If you are using consciousness to mean being aware of stuff in a way that you can act on it then the questions are fairly simple - humans are conscious when in a normal state, not so when knocked unconscious. Likewise rabbits. The United States can show collective consciousness in that its citizens in aggregate can be aware of things and react. If you look at a different meaning of consciousness in terms of subjective experience then probably other people have similar experience and rabbits a simpler version, but it's hard to tell.
I don't get why philosophical writing tends to be so vague and waffly. Maybe because they don't achieve much in the real world unlike say neuroscientists studying consciousness and so need to hand wave and be vague to cover up the lack of real content?
His argument doesn't need a definition of consciousness. He argues that if A arises from certain traits of B, then you must accept that A also arises from C, since C contains the same traits.
Secondly, he clearly refers to a definition of consciousness called phenomenology. And one must assume that "materialist" is a term in philosophy that people familiar with the field know the meaning of, and people not familiar with it don't. This paper is not an introduction to either of those terms.
Your complaint is that you don't know what these words mean, and that is what is apparently wrong with philosophy.
Thank you. I'd also like to add that philosophy has a long history of pretty dramatic accomplishments in the real world. You can thank philosophy for science and the various modes of scientific inquiry, and the concept of natural rights and liberal democratic government. On the flip side, the Nazi regime was heavily influenced by late 19th Century philosophy (not all accomplishments are good?). Every discipline has an accompanying branch of philosophy that defines its modes of inquiry and research methods, and philosophical lines of thought infiltrate every corner of the public discourse. It's just idiotic to say philosophers accomplish nothing in the real world.
>I'd also like to add that philosophy has a long history of pretty dramatic accomplishments in the real world. You can thank philosophy for science and the various modes of scientific inquiry, and the concept of natural rights and liberal democratic government. On the flip side, the Nazi regime was heavily influenced by late 19th Century philosophy (not all accomplishments are good?).
If your method of answering questions yields contradictory answers to the same question (eg: "Do individual humans have moral worth? Nazism says no, liberalism says yes -- philosophy!"), then it's not very good.
Philosophy isn't a method of answering questions, it's a discipline. (I mean, do you really think that everyone from Thales to Quine was using a single method?)
"His argument doesn't need a definition of consciousness. He argues that if A arises from certain traits of B, then you must accept that A also arises from C, since C contains the same traits."
My reading is that he's actually debunking this argument!
This is also what I find most frustrating about philosophy. Most of it seems to be like mathematics but where you don't properly outline your definitions and as a result the logic is riddled with hard to identify contradictions. Similarly, I think that proving or disproving the existence of God would be pretty trivial if anyone would willingly give a good definition of God.
If you're unhappy with unclear definitions in philosophy you might appreciate Wittgenstein's Tractatus: http://philosurfical.open.ac.uk/tractatus/tabs.html, in which he uses formal logic to present a proof that all moral philosophy is nonsense. It's considered by some to be the greatest work of philosophy of the 20th century.
"The correct method in philosophy would really be the following: to say nothing except what can be said, i.e. propositions of natural science--i.e. something that has nothing to do with philosophy -- and then, whenever someone else wanted to say something metaphysical, to demonstrate to him that he had failed to give a meaning to certain signs in his propositions. Although it would not be satisfying to the other person--he would not have the feeling that we were teaching him philosophy--this method would be the only strictly correct one."
Wittgenstein himself went on to criticize parts of the Tractatus later in life.
It's a bad idea to treat philosophy as a menu from which one can choose the most appealing items and be left with a satisfactory understanding of the problems at hand.
>It's a bad idea to treat philosophy as a menu from which one can choose the most appealing items and be left with a satisfactory understanding of the problems at hand.
Why's that? All moral propositions ultimately rest on either circular reasoning or unjustified assumptions (https://en.wikipedia.org/wiki/Regress_argument, also see the argument in Wittgenstein's aforementioned Tractatus), so searching for absolute philosophical truth is doomed to failure. There's even a school of philosophy based around this approach: https://en.wikipedia.org/wiki/Pragmatism.
I didn't intend to imply that one should search for absolute philosophical truth. In fact, such an idea comes readily into conflict with my statement: it's just the search for an item on the menu that satisfies one completely extended to menu items that have not yet been written.
If you define the success condition for the search to be arriving at the final truth, then yes, you are doomed to failure. That's definitely not my success condition for philosophical inquiry, but now we are getting into why I think philosophy is valuable, which is a bit out of scope here I think, because we both agree that philosophy is valuable. I was just saying that stopping at the Tractatus as if it would satisfy the previous commenter is, well, not good. At the very least one should put the Tractatus in context with Wittgenstein's Philosophical Investigations. The Tractatus is very assailable.
>I was just saying that stopping at the Tractatus as if it would satisfy the previous commenter is, well, not good.
I didn't mean to imply this, rather I meant it as a starting point, to give the commenter an example of a philosophical text that is more formal in its definitions, and hence might frustrate them less. I should probably have made it clearer though that I was just presenting the quote from Tractatus, not necessarily endorsing it.
Well, then it seems that I take issue with your statement purely because I'm a jerk who thinks that the idea that philosophy should leave you less frustrated rather than more frustrated is a bad idea ;)
Yes, in my experience, the religious are fond of saying, "You atheists can't prove God does not exist." Yet it's quite simple to prove that a particular God with a particular set of attributes [edit: does or] doesn't exist.
Interestingly, many people (including Gödel) have made logically sound proofs that a particular God with a particular set of attributes does exist. But the way they end up defining the god usually doesn't fit any religion's god.
Yes, indeed, it's all about defining the attributes. It seems to me that the gods believed in by proponents of extant religions were "created by committee" and don't have completely coherent, logically-consistent, non-contradictory properties.
Gödel's ontological proof certainly derives its conclusion correctly from its axioms. But the problem isn't only that the conclusion isn't really "God exists" if you want that to match the god of any actually existing religion; it's also that the axioms are ... well, let's just say "not obviously true".
In particular, he relies on this notion of "positive properties". It is not at all clear that there is any notion of "positive properties" that satisfies his axioms, still less one for which "positive" is actually a good name.
(The axioms are certainly false if, e.g., we take "positive" to mean something like "regarded as good by some particular person" or "regarded as good by a majority of intelligent and thoughtful people". Gödel wants the conjunction of all positive properties to be positive, which in particular implies that it isn't outright impossible. But it's easy to find properties generally regarded as good that are not mutually compatible.)
What he's getting at is probably along these lines:
I define the "God" I'm looking for as an invisible green humanoid. Because something cannot be invisible and green at the same time, this "God" cannot exist.
It's naturally much harder to do this with Gods that correspond to particular religions, but from time to time it is possible. E.g. a jealous God that will destroy civilization if we don't offer it human sacrifices has not done this, so it can't exist.
Do you mean "which reflects or emits light of these particular wavelengths", or do you mean something about human perception?
Also, similar questions apply wrt "invisible".
If "green" is based on human perception, why would an entity not qualify which, when a person looks in its direction, is not really seen as having any shape, but rather as some greenness in that general direction?
Why would this entity not be considered both green and invisible?
Alternatively, does "invisible" mean invisible to the naked human eye?
You can determine if those attributes are consistent with each other. Also, if they have observable properties, you have an angle of attack - whether to prove or disprove (or rather, to ascertain likelihood of) the existence of a being with such attributes.
Interestingly, the one branch of philosophy (ethics) that actually has real world applications from time to time also tends to be the most specific in terminology.
This sentiment is shared by many, and philosophers are sympathetic. But what you are describing is a rigid logical argument, which is only one way of deconstructing something. And this essay would not be as effective, or may not even be possible, if it were purely logical. The sense you describe of not being able to tell if a statement is wrong is the agent, and it can be thought of as mere intuition. But through the exploration of said intuitions, light is shed where it was never shed before. This is the only requirement of philosophy. They are observational raw thoughts. Of course, being so raw has its disadvantages. It's vague and often unscientific. It's argumentative word play, which does have a way of tripping some nerves.
Ummm... If you're going to make an argument and you aren't applying logic to that argument, then it is really not an argument worth having! Philosophy does not invite vagueness because it's philosophy - quite the opposite, as I recall from taking a philosophy logic class at university.
Being vague is simply sloppy. If you can't write concise thoughts which clearly communicate your intention, then I for one am not going to bother reading it.
It is vague to those seeking logic. To those exchanging intuitions it is not vague at all. "The United States is conscious" is concise. The logic behind it is weak but the intuition is there, which is what matters. And from here the debate can begin, as well as scientific inquiry and logical scrutiny -- all unfolding only to deepen our understanding of what is right or wrong, beautiful or ugly, real or imaginary (basically whatever your interests are). Just count the comments here. A job well done.
I think you could argue that physics (and other sciences) are applied philosophy, but they have the same issues with imprecise definitions. Consider what it means to be a "particle" today vs. 150 years ago. Supposing the word had been rigidly defined and everyone stuck to it, "particles" as such have been debunked.
The ability of a definition to change in light of new evidence is an essential quality of a precise and useful definition. Definitions, to be useful, must be precise and falsifiable. That a definition is changed because its earlier version was falsified is evidence of precision, assuming the new definition is also falsifiable with evidence.
In his paper "Why Philosophers Should Care About Computational Complexity" [1], HN-favorite Scott Aaronson resolves to my satisfaction the question of whether a waterfall can be said to be computing the solution to some problem as it cascades over the rocks, because there exists a mapping in which the initial water state is the initial state of the problem, the gyrations of the water "compute", and the final state of the water can be mapped to a solution. He points out that we can with some actual mathematical rigor observe that the mapping itself can be said to be doing all the work.
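To make that point concrete, here's a toy sketch (my own construction, not Aaronson's actual argument) in which an arbitrary "waterfall" dynamic gets credited with deciding primality, while the mapping quietly does all the computation:

```python
# Toy illustration of "the mapping does all the work": the waterfall's
# dynamics are arbitrary and problem-irrelevant, yet a cleverly chosen
# interpretation map can always "read off" the right answer, because the
# map itself contains the entire computation.

def waterfall(state, steps=10):
    # Arbitrary dynamics: the water just sloshes around.
    for _ in range(steps):
        state = (state * 31 + 7) % 1000
    return state

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def interpret(final_state, n):
    # The "mapping" from water states to answers. Notice that it ignores
    # final_state entirely: all the computational work lives right here.
    return is_prime(n)

n = 97
print(interpret(waterfall(n), n))  # True, but the waterfall contributed nothing
```

The give-away is that `interpret` never uses the waterfall's final state; any claim that the water "computed" primality is an artifact of where we hid the work.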
Similarly, I feel trying to parse conscious intentionality by any known meaning of the term "conscious" out of something like "all the individuals of the United States" is a situation where the mapping is doing all the work.
Clearly, there is something to the "United States" as well as other groupings of human beings. But I daresay in a lot of ways these things are less mysterious than they seem... we deal with these groupings all the time, and if we thereby fail to ascribe consciousness to them, I'd say our experience should probably be listened to. Sure, groups of humans routinely do great things no individual could do, but at the same time, and with no contradiction, groups of humans fail miserably and stupidly too. We've all seen committees that fail to accomplish a goal that any individual on the committee could have achieved alone, or where the committee remains fuzzy on its purpose (pardon English's anthropomorphization, there) even as the individual members are all quite clear (but divergent). Rather than trying to throw this in the "consciousness" bucket I think it's better understood as, well, the way we all already collectively conceive of these things, as human organizations, with their own foibles, characteristics, and properties.
It's not consciousness. It's something else. "Greater" in some ways, and yet profoundly lesser in others. Trying to view it through the lens of "consciousness" is just anthropomorphism striking again, and I'd say actively harmful in terms of trying to understand the phenomena.
>HN-favorite Scott Aaronson resolves to my satisfaction the question of whether a waterfall can be said to be computing the solution to some problem as it cascades over the rocks, because there exists a mapping in which the initial water state is the initial state of the problem, the gyrations of the water "compute", and the final state of the water can be mapped to a solution. He points out that we can with some actual mathematical rigor observe that the mapping itself can be said to be doing all the work.
This is why panpsychism appeals to me. On some level it makes more sense for me to believe that sentience is an intrinsic property of physical reality than to believe that sentience can emerge from someone moving sticks and stones around according to some algorithm.
The possibility of sticks and stones, their motions, and algorithms are already an intrinsic property of reality, so the things you want already exist in the model you reject.
Maybe sentience is intrinsic to change (someone moving sticks and stones around according to some algorithm). Have you read Giulio Tononi's "Integrated information theory of consciousness"? One of its conclusions is that panpsychism must be true.
> He points out that we can with some actual mathematical rigor observe that the mapping itself can be said to be doing all the work.
There is no mathematical rigor whatsoever in that portion of the paper. He just conjectures that there would be no efficient algorithm for encoding (e.g.) chess positions as states of a waterfall. That conjecture actually seems fairly unlikely to be true. You only need to be able to specify a set of physical states that compose in such a way that you can, e.g., incrementally push items onto a stack, and you're basically done. How can we possibly be confident that there is no state description of a waterfall that makes that possible? Note that we have to allow the state descriptions to be very complex, since the physical states corresponding to the computational states of a microprocessor are also extremely complex. This complexity happens to be easier for us to deal with because microprocessors have been designed to make it easy for us to put them into particular computational states. But the relevant states of a waterfall, while much more difficult for us to manipulate, will not obviously be any more complex. And to make the key point, we need only find one suitable inanimate physical system. It really doesn't seem so unlikely that there are a few of them out there.
On top of all this, why should it matter if there's no efficient encoding algorithm? It would obviously lead to an infinite regress if we say that a physical state has to have been "encoded" in order to count as a bona fide computational state (since then the input to the encoding algorithm would itself need to have been encoded).
Right - there's also no efficient algorithm for encoding chess positions into the electrical and chemical states of a couple of pounds of active neurons, and yet living humans seem to be able to play chess.
"He just conjectures that there would be no efficient algorithm for encoding (e.g.) chess positions as states of a waterfall. This actually seems fairly unlikely."
I think you're answering a different question than Scott. You seem to be answering "Can I construct a water-based computer that looks like a waterfall that could solve a chess problem, set up the initial parameters, and read the answer off the bottom?" This is conceivable. It is likely it would involve a highly implausible object of highly implausible accuracy and reliability, but if such an object existed it would still be a polynomial calculation to produce it. (Or, at least, a polynomial calculation could produce an object, if not the optimal one. The lack of a general solution to the Navier-Stokes equations might prevent you from finding an optimal one. And you might find yourself having some trouble with quantum effects. But such are the problems in the real world.) And our polynomial approximation is still not going to look like a waterfall, as it will inevitably need looping constructs where state flows "back" uphill... if you want it to be one-way like a waterfall, that is going to be exponentially large.
Scott's question is a different one. Given a real world waterfall that you have not constructed, but simply come across, can it be said to be computing the solution to a chess problem? Is a random rock that is sitting on the ground, jiggling away with atoms in constant motion that all the computers in the world could not possibly provide enough computational power to fully simulate, able to be said to be computing something with all that power? Could a boulder be sitting there simulating a human mind? I don't mean in the pan-psychic sense, I mean, literally, is it simulating a human mind? It has the computational capacity, when considered in the raw.
This is where Scott's argument kicks in, where the mapping is doing all the work.
Further evidence of this is that even if you want to sit here and argue that this boulder that is sitting in front of us (metaphorically) really and truly is calculating the state of your mind as well, well, with another equally sensible (and exponential and impossible) mapping, it's also calculating mine, it's also calculating Attila the Hun's, and it's also calculating the brain states of the final human being to ever live, and also deer's brains and the solutions to world hunger and pretty much every other interesting thing ever, really. The contortions required to create the mapping of boulder state to human brain state are such that you can fit literally almost anything into it, and therefore, it is reasonable to point out that it is meaningless. It's an interesting argument that provides a surprisingly rigorous line that allows us to say that, no, that waterfall is not solving a chess problem or anything else... it really is, well, a waterfall.
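The "mapping is doing all the work" point can be sketched in a few lines of Python. The "boulder" and its states here are entirely hypothetical, just pseudo-random noise; the trick is that an interpretation table built after the fact makes those states "compute" whatever function we like, and a different table would make the same states compute anything else.

```python
import random

# A hypothetical "boulder" whose internal states are just noise.
random.seed(0)
boulder_states = [random.getrandbits(32) for _ in range(4)]

# The computation we claim the boulder performs: squaring 0..3.
desired_outputs = [n * n for n in range(4)]

# The "interpretation" is a lookup table built AFTER the fact, pairing each
# observed boulder state with the answer we wanted. The boulder contributes
# nothing to the result; the table carries the entire computation.
mapping = dict(zip(boulder_states, desired_outputs))

# "Reading off" the boulder's answers just replays the table.
decoded = [mapping[s] for s in boulder_states]
print(decoded)  # [0, 1, 4, 9]
```

Swapping `desired_outputs` for chess evaluations, brain states, or anything else leaves the boulder unchanged and only changes the table, which is the sense in which the mapping, not the physical system, does the computing.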
> Given a real world waterfall that you have not constructed, but simply come across, can it be said to be computing the solution to a chess problem?
Yes, that's the question I was addressing. I really can't figure out why you thought otherwise. There's nothing in my original comment about constructing artificial water-based computers.
>This is where Scott's argument kicks in, where the mapping is doing all the work.
It is his conjecture that there is no computationally simple mapping, but I see no reason to believe that this conjecture is correct. He certainly doesn't give any reason to think so in the paper. And to echo jameshart's point, he gives no reason to think that the mapping from chess positions to brain states would be any simpler than the mapping from chess positions to waterfalls.
If you find his conjecture plausible for some reason, that's fine, but you're quite wrong if you think that the paper presents any rigorous mathematical argument. The discussion is much less rigorous and sophisticated than the existing philosophical literature on the topic. (After all, Putnam's original paper contained a proof.)
>Further evidence of this is that even if you want to sit here and argue that this boulder that is sitting in front of us (metaphorically) really and truly is calculating the state of your mind as well, well, with another equally sensible (and exponential and impossible) mapping, it's also calculating mine, it's also calculating Attila the Hun's, and it's also calculating the brain states of the final human being to ever live, and also deer's brains and the solutions to world hunger and pretty much every other interesting thing ever, really.
Yes, that's the problem. That's why the computational theory of mind seems to be a bit of a non-starter. Since pretty much every physical system computes pretty much every function, it's difficult to see how thought can arise merely as a consequence of the brain computing a particular function.
Agreed.
The human tendency to anthropomorphize what it should not has enabled priests and witch doctors of countless varieties, in many places and for thousands of years, to make a living without producing anything except the propagation of misunderstanding. This ignoble tradition is apparently being carried on in the philosophy department at the University of California.
A bunch of me will arrive at consensus faster than a bunch of a whole lot of different people. I'm all the same, after all, so it's easier to come to some common agreement about what this is. Ask a million people what that is, they'll tell you a thousand different answers.
But if you see this as anthropomorphizing then you're not a materialist, or you'll have to show some qualitative, essential difference between the US and the neurons in your brain. There is nothing to anthropomorphize because there's nothing special about the anthropos: the two are either examples of the same phenomenon or they're not.
You sort of ironically demonstrate the exact sort of anthropomorphization I am decrying, which is the casual (as in, unexamined) assumption that the only complex state which can exist is human consciousness (or, if you prefer, "consciousness, which is the same thing that humans have"). If in fact there are other complex states that can exist, then "anthropomorphization" is the act of forcing the human state onto those other possibilities. The space of complicated possibilities is much richer than the space of simple ones, and there is no a priori reason to believe that just because something is complicated, and has some things vaguely resembling human consciousness about it, it is therefore human consciousness. Just because a 58-dimensional object contains some portion of itself that is a spherical section does not mean you have a 58-dimensional sphere.
I feel justified in referring to "the" human state because in the grand possibility space of complex systems, human brains occupy but a single point, in much the same way that our little planet is just a dot.
Any definition of "consciousness" that covers all possible complex systems is just another word for "complex system", and therefore useless.
It isn't for me to have to prove that the quadrillion-dimensional system of the United States is different from the billion-dimensional system of the human consciousness [1], with wildly differing characteristics between the relationships of those dimensions... I'd say it's the other way around: those who think it's all the same ought to give a compelling reason as to how it could possibly be the same thing, because on the face of it it's actually even more mathematically absurd than common sense would suggest. Common sense still has that anthropomorphization built into it, deceiving you, and leading to ideas like "panpsychism", where the only thing we can conceive is consciousness-as-we-know-it, so the only question is how much of it something can have.
[1]: My selection of the dimensionality of the human is arbitrary; my selection of the United States is obtained by simply multiplying people x human states. This is a brutal underestimate because the humans are interacting too and that creates further dimensionality, but, meh. It's really just "big numbers".
This is an excellent and thought-provoking article. I'm not entirely sold on the argument -- in particular, there's no reason to automatically assume that any sufficiently complex system is conscious simply because of its complexity. If you looked hard enough, you could probably find lots of complexity in e.g. the interactions between billions of individual cells in a slime mold, but most people wouldn't use that as a basis for calling it conscious. Human brains aren't just an undifferentiated mass of connections that somehow bootstrap themselves to consciousness; they have definite functional units that are genetically determined. Countries have at most a few thousand years of development behind them, rather than billions of years of animal evolution, and it seems a bit implausible that they would have developed the complex processes of consciousness so much more quickly.
We ascribe consciousness to humans by observing their behavior, not the structure of their brains. And it's true that countries do respond to stimuli and act with purpose, but (echoing Chalmers) I think a lot of that can be ascribed to individual people controlling a hierarchy, and not to the collective. If there's anything about a country that arises from the distributed connections between humans, it would be more likely to manifest as broad social trends, not specific actions like going to war. But those general trends seem to ebb and flow for their own inscrutable reasons; they certainly don't show obvious evidence of intelligent purpose.
Nevertheless, the concept is fascinating. And I think the author makes an excellent point that even if it's wrong, the argument is worth considering if only to help us come up with better criteria for what it means to say an entity is conscious.
Good point, and agreed. To me, that's the weakest part of the argument. Basically I would argue that the US is closer to Swampman than to something like a rabbit brain in this formulation:
"One might think that for an entity to have real, intrinsic representational content, meaningful utterances, and intentionality, it must be richly historically embedded in the right kind of environment. Lightning strikes a swamp and “Swampman” congeals randomly by freak quantum chance. Swampman might utter sounds that we would be disposed to interpret as meaning “Wow, this swamp is humid!”, but if he has no learning history or evolutionary history, some have argued, this utterance would have no more meaning than a freak occurrence of the same sounds by a random perturbance of air.[18] But I see no grounds for objection here. The United States is no Swampman. The United States has long been embedded in a natural and social environment, richly causally connected to the world beyond – connected in a way that would seem to give meaning to its representations and functions to its parts.[19]
I am asking you to think of the United States as a planet-sized alien might, that is, to evaluate the behaviors and capacities of the United States as a concrete, spatially distributed entity with people as some or all of its parts, an entity within which individual people play roles somewhat analogous to the role that individual cells play in your body. If you are willing to jettison contiguism and other morphological prejudices, this is not, I think, an intolerably weird perspective. As a house for consciousness, a rabbit brain is not clearly more sophisticated. I leave it open whether we include objects like roads and computers as part of the body of the U.S. or instead as part of its environment."
I'm quite sure that from the perspective of a cell in my liver, my whole life seems to be ebb and flow, assuming the cell were able to observe and comprehend processes much broader and longer than itself.
It's the same in the case of a possible consciousness of a large group: as parts of it, we would face many difficulties just noticing and understanding such a phenomenon.
Especially taking into account that it won't be exactly the same type of consciousness as the human one.
That's kind of an argument from ignorance, though.
It's conceivable that any kind of system whatsoever is conscious in ways that we can't understand or comprehend. But if the author's position is that the US should be provisionally considered conscious because it takes purposeful, intentional actions, then a lack of apparent purposefulness seems like a reasonable criticism of the argument.
I think there's a fundamental confusion in the article between goal-seeking behaviour and communicable self-awareness.
Clearly, large collectives of people can seek goals. Large collections of transistors can also seek goals, up to a point.
But it's impossible to communicate directly with the United States as a self-aware entity, or to have a direct conversation with a collective.
You can communicate with a representative of the US, but there's no way anyone can talk to, or email, or Skype, or send a paper letter to, or have a telepathic conversation with, any entity that would consciously describe itself as "The United States of America."
This matters because you could ask ten representatives of the US for their opinion on something and get ten conflicting answers - without essentially damaging the concept of "US-ness."
This clearly doesn't match the definition of a unified consciousness. It's not the same as a single consciousness changing its mind, because there is no recognisable single mind that changes.
What about insect colonies, animal herds, bird flocks, and corporations? They simply amplify the goals and personalities of their leaders. I'm not aware of any instances where - for example - a separate corporate mind made its wishes known to override board decisions.
(You could possibly argue this is what happened with Reddit. I'd say no - that was a conflict between factions with different goals, not evidence that there's a metaphysical Reddit-mind independently placing conference calls and tweeting to steer Reddit's future.)
> But it's impossible to communicate directly with the United States as a self-aware entity, or to have a direct conversation with a collective.
Would you expect one of your cells to be capable of carrying a conversation with you? ("No.") Then why would you expect a "cell" (citizen) of the United States to be able to communicate with it?
> This matters because you could ask ten representatives of the US for their opinion on something and get ten conflicting answers - without essentially damaging the concept of "US-ness."... It's not the same as a single consciousness changing its mind, because there is no recognisable single mind that changes.
You could also stimulate ten neurons separately and receive 10 differing responses. And when a person changes their mind, there is also no recognizable single neuron that has changed.
The first point is begging the question. Clearly, humans and many animals communicate.
Do countries communicate with each other in similar ways? They can appear to. But in fact there's no communication independent of the individuals who represent the countries. The entities called "Russia" and "United States" are wholly defined by the contents of the embodied minds of their representatives.
There is no way "United States" can change its mind during an international negotiation independently of any of its representatives. If it did they would suddenly stop pushing one line and start pushing a different line for no obvious reason.
I'm not aware of this ever happening.
Compare that with human and animal communication. So far as I know, my self awareness isn't defined by a shared description and belief in "Me" across all my neurons. If you pull an individual neuron out of my brain it won't have any concept of anything, never mind of "Me."
So the two processes are completely different. One is the flocking behaviour of (semi)intelligent agents.
The other is an emergent property of units that have almost exactly zero intelligence and awareness individually, but somehow combine to produce something that has much more.
> There is no way "United States" can change its mind during an international negotiation independently of any of its representatives.
This is provably false unless you are adding hidden assumptions. It is easily shown in the case of some bureaucracies that a person will have a hard time getting something out of them, even though no single person will claim to be the one stalling the request. From the outside, the bureaucracy behaves as if it has a different goal than the (stated) goal of each individual.
Communications are regularly attributed to the United States in law and diplomacy, and the humans who carry out those communications will often have personal views that are inconsistent with the state's position. The concept of a 'board decision' itself implies the attribution of intention, which is one aspect of consciousness, to a corporation. Any board resolution that is carried by majority reflects the will of a 'separate corporate mind' overriding the conscious decisions of the dissenting directors.
I disagree. You're still confusing goal seeking with consciousness. You can easily automate goal-seeking, so goal-seeking and intention are not nearly a sufficient condition for consciousness.
On your basis you could impute consciousness to any cybernetic system that takes a majority decision - such as the navigation systems on the old Shuttle.
And just because communications are attributed to a nation, doesn't mean you're dealing with anything more than a diplomatic convention. In practice you're still dealing with leaders and representatives, and the leaders will set policy.
Without the leaders there is no entity - and in fact it's also known in diplomacy that you can decapitate a country simply by killing its leadership, or define a new country by changing its leadership.
A change of leadership creates a change of identity and intention, even though the rest of the things it does remain broadly unchanged.
Compare that with the human brain, where there are no "leadership neurons". There are broad areas of the brain that integrate experience and are involved in making decisions, but you can't point to one neuron and say "That's the president", or to one group and say "That's the ruling party."
More, there's no self-awareness. These arguments are kind of pointless without a final definition of consciousness, but it seems likely to me that entities that act in conscious ways have some internal representation of a persistent self which is perceived - somehow, in a magical way we don't understand - as a unified self-identity.
So you need two things for that. One is a persistent representation of self. And the other is the ability to experience that representation as a singular self-definition.
Humans are embodied, so we know what experience is. Corporations, countries, and bird flocks aren't.
If you invade a large land area, remove a source of profit, or trap half a flock of starlings in a net, no singular embodied experience corresponds to that loss. You can predict goal-seeking behaviours, and you can find individual responses to individual circumstances. But all of those fail the requirement for a single unified self-aware change in state.
Otherwise you'd have to argue that countries somehow feel pain when they're invaded. Citizens may feel pain, especially if they're killed, injured, or made homeless. But countries?
I remember thinking about this after reading Gödel, Escher, Bach. The reason I don't believe it's clearly probable is that it's hard to imagine that a system that hasn't been forged under as much natural selection would have much intentionality. And it's hard to imagine consciousness without that.
It's also worth considering that the US could be sliced along arbitrary boundaries and it probably wouldn't change too much. That alone seems like such a structural/functional difference that arguing for the consciousness of a system like the US on similarity-to-brains grounds seems difficult.
Still, the strangest thing I've considered is: suppose that the US, or some other group we are a part of, like the galaxy, is conscious, and we get good evidence that it's true. Then disturbing its function, or destroying it, could kill that consciousness. It would be like killing God.
Oh yes, with the ant colony called Aunt Hillary :) That and The Mind's I (an anthology edited by Hofstadter with Dan Dennett) were both a big influence on my thinking about this. I absolutely agree with the premise of this article, and I've wondered for years why people like Roger Penrose insist on consciousness being the result of some hidden process at the quantum level rather than accept the possibility that consciousness is an emergent property of networks. I would need to go back and reread his Mind books now to re-familiarize myself with his arguments, though; I've forgotten the details by now.
> Still, the strangest thing I've considered is: suppose that the US, or some other group we are a part of, like the galaxy, is conscious, and we get good evidence that it's true. Then disturbing its function, or destroying it, could kill that consciousness. It would be like killing God.
I absolutely believe in universal consciousness (as instances of networks across many different substrates, running at different speeds and with varying topologies and so on). But whatever you do will disturb its function, and it will be relatively trivial in the overall scheme of things; to quote Peter Greenaway, 'A great many things are dying and being born all the time.' If you can conceive of an entity like 'the United States' or 'humanity in general' being conscious on a different scale than oneself, then one can consider oneself as the superposition of a variety of different ideas of varying intensity, as a multiplexed signal whose sample rate is a function of the underlying cellular processes. 'You' may not be the sum of your networked parts, but rather the algorithm that defines the traversal of said network.
>Still, the strangest thing I've considered is: suppose that the US, or some other group we are a part of, like the galaxy, is conscious, and we get good evidence that it's true. Then disturbing its function, or destroying it, could kill that consciousness. It would be like killing God.
Obviously the most fun thing to think about. It's worth noting that "consciousness" is on a sliding scale from dim awareness to focused lucidity. Is my dog conscious? Probably not, but there's something there. How about a gorilla? The lines start to blur. Is it outlandish to ascribe to a large, distributed group, a fish-level of awareness?
> "consciousness" is on a sliding scale from dim awareness to focused lucidity
assuming that all consciousness resembles that of humans. I'm guessing our tiny brains are capable of only a thin sliver of what is possible when it comes to apprehending and interacting with reality.
Assuming you're really awake and having dreams as opposed to your thoughts being reorganized in a way that you simply remember as vivid, real experiences. Even if you're conscious, that's only a tiny part of sleep (often REM). Being awakened otherwise is like just having missing time where there's a gap in conscious experience. As if it was off.
>> I remember thinking about this after reading Gödel, Escher, Bach. The reason I don't believe it's clearly probable is that it's hard to imagine that a system that hasn't been forged under as much natural selection would have much intentionality. And it's hard to imagine consciousness without that.
I too thought of G.E.B. Wasn't there a story in there where someone was talking to an ant colony? Not the ants, but the colony as a whole. I've long considered the similarity between cells:people and people:society. A society can have behaviors and a sort of personality. Consciousness seems a stretch, but not impossible. The thing I tend to think more about is cancer, or the fact that a person can pick a scab, or that people with various psychological issues can harm themselves or parts of themselves (see cutting, eating disorders, etc.).
Interesting that you brought up natural selection, since arguably countries do have some sort of natural selection, but it's fairly limited since they are restricted to some geographic area limiting the competition to neighbouring countries (at least until recently).
However, the same restrictions don't apply to corporations, and indeed those are heavily competing over their means of existence (money). Which gives an interesting perspective on the current tendency of large corporations to improve their own chances of survival by dodging taxes, influencing politics, etc. In some sense it does seem like corporations have become 'intelligent' and are attempting to change their environment to suit them. I'm not too sure what the end result will be, but I fear it's unlikely to be beneficial to humans.
To stop this we may have to, as you say, kill "God".
The claims about the U.S. should apply to a corporation even better, given the tighter organization, the clearer intent, and how its parts collectively move in the same direction more. Maybe he should write a paper on them. They're even legally classified as persons. ;)
I like the idea, though with corporations it's clear where the intentionality comes from: the CEO, the board, etc. It doesn't seem to "emerge" as much from lower level entities à la neurons. Not that that invalidates a group-level consciousness of course...
I would actually be fairly surprised if there weren't some neuronal mechanic that regulates and reinforces group thinking. I have my doubts that it is entirely the individual's consciously observed and consciously declared will that constructs collective direction. Even little tweaks: 'is this idea going to go over well in tomorrow's meeting? Nah, better do this instead.' There is so much thought that goes on that is regulated partially by how happy one feels, how validated and accepted one feels. It doesn't necessarily mean it is in the best interests of the company, the country, or the individual. It's just what they choose to accept as most likely true at the time of having to make an assumption about the way things are, and what that means for how one can direct oneself.
Since the author established that the United States is a linguistic entity, why don't they simply ask it: "Are you conscious?", "Do you have subjective experiences?" Surely whoever you direct the question at will laugh at you, and no one would take you seriously.
A more plausible way for the US to take this type of question seriously is if it were indeed asked by a planet-sized alien. Then I expect the authorities in the US to come up with some traditional explanation of countries and how the US is one, and to reject the claim that it's conscious. If an entity doesn't believe that it's conscious, then can it be conscious?
I find it strange that the author didn't entertain such a line of thinking since, to me, it makes the most sense. The first thing you would do when you want to know something about a language-capable entity is probably ask it. He mentions the Turing Test with regard to testing the consciousness of the supersquids, but why not the United States?
> Surely whoever you direct the question at will laugh at you, and no one would take you seriously.
But, then, do you consider that the entity may actually lie about it? It may be psychopathic enough to say, "I am not conscious, that's ridiculous", and yet be conscious.
Another possibility is that it wouldn't understand words in the way we understand them. Asking a nation state "are you conscious?" may be like asking a child "do you have knee-jerk reflex?". How would you know without prior experience?
However, asking in a different manner - say, as in a referendum in Greece - you can get a very emphatic response, which hints at consciousness at the national level.
Sure. But when did the US ask itself that question? (assuming that -- in accordance with the paper -- when the US "speaks" it's the officials that speak for it)
>If an entity doesn't believe that it's conscious, then can it be conscious?
I'm sure there are brain-damaged people who have a single bit stuck flipped causing an aversion to talking about that concept and fully claim to not be conscious but are otherwise high-functioning and really do have the same sort of internal experiences as the rest of us. Classifying them as "not conscious" wouldn't be useful at anything besides making them stop arguing with you. At that point, "conscious" is just a word for people who meet some minimum intelligence and happen to also call themselves "conscious". We might as well use any other word.
I don't think the word "conscious" has enough of an agreed-upon definition for this question to be even as productive as the classic question-that's-really-about-definitions "does a tree falling with no one around make a noise?". I think the question "is X conscious" is mixing many different questions together. Does the US have a subjective experience? (Who knows. I think we only assume other humans even do because we realize we're probably not the one special person to have one, but it's hard to generalize that solution to things very physically different than ourselves.) Does the US think like us? (It clearly takes a stupidly long time to come to any decisions and then changes its mind every 4 or so years on certain things.)
Just for fun as a little tangent, here's an absurd example possibly closer to the US: imagine you have a person that acts just the same as any other, except somehow their neurons in addition to carrying simple signals each have human intelligence and awareness and rich inner lives that they mostly keep private, but when you ask the person if they're conscious, their neurons stir from their own thoughts at hearing a familiar word and break protocol by answering individually instead of following their proper neural behaviors to allow the person to answer the question themselves. (If that seems a little too disconnected from reality to even picture, then it might help to imagine a natural born human who undergoes a surgical process involving nano-machines that replaces their original neurons with the described self-aware neurons one at a time.)
>imagine you have a person that acts just the same as any other, except somehow their neurons in addition to carrying simple signals each have human intelligence and awareness and rich inner lives that they mostly keep private, but when you ask the person if they're conscious, their neurons stir from their own thoughts at hearing a familiar word and break protocol by answering individually instead of following their proper neural behaviors to allow the person to answer the question themselves.
This is actually my other objection to the paper. It seems like centrality is important for the concept of consciousness (I agree that it means a hundred different things, but it's convenient to talk about as a single thing). Most people would identify as the person behind the eyes (or generally in the head). And the fact that all experiences of the body are experienced in that central place makes it quite hard to imagine that the individual parts can also experience things on their own, let alone be able to disagree with the central thing.
A collection of entities or the pattern the entities partake in are NOT self aware. It's only the entity itself observing the pattern or collection as such, that is.
It seems people are postulating properties of collections and patterns of groups, then checking things off, one by one, being satisfied of the similarities. It is the person checking these things off that is realizing similarities then taking the leap of anthropomorphizing awareness into that idea.
The salient difference for me here is that the brain has a holistic factor:
Individual brain cells are not self-aware, or intelligent in any capacity. A whole brain is both.
Individual humans in The United States are aware of both themselves and of the United States. The United States is arguably not aware of itself in a greater-than-the-sum-of-its-parts capacity, as a human brain is.
(So far I've only read the abstract of the linked article)
It's a fun thought experiment, but the illusion of self probably didn't just spontaneously arise in humans; it's much more likely to be an adaptation from hundreds of thousands of generations of natural selection for a more intelligent species. Nature selected countless times, through genetic succession, for the more aware species.
> do you think that brain cells are (would be) aware of the fact that we are conscious?
No, a neuron is too simple a system to be capable of awareness by itself. I get what you're getting at, but if we're talking about a higher-level phenomenon, let's not also call it consciousness. In fact, we essentially use "consciousness" as a post-facto description of our own illusion of self, so it essentially makes no sense to ascribe it to a system that didn't arise more or less the same way as we did.
A related question I like to consider from time to time: "Are corporations conscious?" They are composed of many individuals who we consider conscious and they gather a whole bunch of information on their internals. They even have entire departments of conscious beings devoted to communicating information to other conscious beings that often speak as if they were the corporation itself (sure, corps may have a strange habit of referring to themselves in the third person, but so do some people). How would we even measure this?
It's weird that you're the first comment (from a brief scan) to mention "measurement". All of the obvious tests are fraught with ethical issues, but testing for behavioral responses seems pretty easy, I think. For instance, consider this thought experiment: set up a rogue landed state; have that state attack the U.S. Can we predict a response? Surely! So we know the U.S. responds to very simple "acute pain" stimuli. We know it responds to chronic pain stimuli: if a certain economic activity is no longer justified in a subregion (Detroit: cars), then that region atrophies.
Really, I think the major issue is determining how to talk to the U.S. What's the sovereign-state equivalent of "how's it going?" It seems that the only obvious attributes of the state are that it acquires resources---usually by cooperating in a semi-symbiotic fashion with similar states---and responds to basic pain-like stimuli. The U.S. seems to act more like a very simple worm, and less like a higher animal. Perhaps it has primitive organization? (I'm thinking of the larger jellyfish which, while massive, are cognitively simple.)
States are another fun one, though it's rather terrifying to think that Congress might in some strange way be equivalent to a prefrontal cortex. One of the problems we face here is identifying the information channels that are actually important for driving behavior. In some cases hours and weeks of talks between high-ranking officials in various foreign services do nothing compared to a 30-second conversation on a street corner.
Or phrased another way: we still have no real solid leads on consciousness. There's all sorts of neat things thought up about it, we have some questions we'll want to answer or dissolve, but there's not even a consensus on what consciousness is.
This is really disappointing, because consciousness is neat.
I also find it questionable to posit fake scenarios then try to draw conclusions from them. I can say "what about a fire that boils water, but it's actually frozen?", but it doesn't really mean such a thing is possible. See also p-zombies, an exercise in imagining nonsensical things in order to draw questionable conclusions.
I don't think that philosophical zombies are a nonsensical line of thought at all. As Ray Kurzweil states in one of his books (paraphrasing), "I take it on faith that the universe exists". Accepting the inability to receive proof of others' consciousness is a major battle in the quest for developing a rational view of the subject.
The p-zombie line is "posit the same world as ours, but no one's conscious". At best, it's useless because we don't know what consciousness is. More likely, it's just dumb and a non-sequitur. Like saying "OK, imagine this glass of water. It's identical to water in every way. But it's fire. Ah-ha! What does that mean? <spooky voice>".
So either consciousness is based in physics, in which case p-zombies are a non-starter, or we don't know what consciousness is and it's a pointless conversation.
Seems like common sense to me. Consciousness is the time between when you are born and you die while your brain is in proper/typical working order (e.g. when you can remember and relate and learn from memories).
I don't accept consciousness as anything of real value as far as intelligence goes. There's no reason anything should want consciousness. It's an abstraction layer our higher order has over the lower order, and emulating it is senseless. It's merely the childish desire to make something that resembles us. True, pure intelligence would be something so mechanically, terrifyingly above anything we know that the human mind wouldn't even register it as intelligent. Consciousness isn't worth emulating; it's just the crappy UI evolution gave us.
In the same vein, I see emotion associated with AI so often, and that is so frustrating. Emotion is the opposite of intelligence: it's a series of global variables the leftover parts of our brain use to influence the parts in control. It's a bad system and need not be copied.
If there is a god, this is how I imagine it. Free from the flawed systems we live and deal with, free from emotion, free from compassion, a horrifying being of pure logic who you could never begin to comprehend
I see our various emotions not as the opposite of intelligence, but rather as high speed, roughly-tuned, very imprecise guesstimation engines. Handy where our basic intelligence is incomplete (infancy mostly), when our information is incomplete (to aid guessing) and when we need extremely fast reaction times.
To all those who aren't yet sold on this idea, consider it like this: what if consciousness is not a consequence of organised systems, but rather a property of all matter. Who's to say that the slime mould doesn't have consciousness? To get all new-age and spiritual about it, maybe this could extend to any level of organisation, and the very act of conversing and connecting with other people is an expression of collective consciousness.
The way I see it, this idea is to the traditional view, as the traditional view is to solipsism.
An atom is not "red" or "warm". There are properties that are only properties of an organization, not of the individual.
I think a lot of philosophical hurdles will be gone when we respect boundaries more. In this case not so much physiological or neurological boundaries, but "(machine) learning" boundaries. A system is separated from its environment by a Markovian boundary. If it weren't, it would not be able to build up a representation of "that out there"; it would "be it", but not "represent it". I think many of the questions around "self" and "consciousness" can be solved by postulating mechanisms that use such Markovian boundaries within the (artificial) brain.
Information requires a transmitter and a receiver. These can be the same entity separated through time; for example, by writing something down that you read the next day, the two of you (your old and new versions) communicate with each other. But separation, physical or temporal, makes a system more than some kind of chemical soup.
Anesthesia practically eliminates consciousness yet the brain states are probably more similar than brains and the United States. I'm not sure I agree that apparent similarity is good enough to conclude that the US is probably conscious under a materialist framework.
Definitely possible, however. And fun to consider whole galaxies exhibiting consciousness.
> fun to consider whole galaxies exhibiting consciousness.
Yeah, but a slow consciousness that would be, unless information can travel at speeds faster than light. If not, then there is a limit to how large a computer can be on account of the speed of light and a few other physical limitations.
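To put rough numbers on that slowness, here's a back-of-envelope sketch (the figures are my own illustrative assumptions, not from the thread): compare the minimum signal-crossing time of a Milky-Way-sized system, capped at light speed, against a signal crossing a human brain.

```python
# Rough back-of-envelope: how slow would a galaxy-scale "mind" be?
# Assumed figures: Milky Way ~100,000 light-years across; brain ~15 cm;
# fast myelinated axons conduct at roughly 100 m/s.

C = 299_792_458.0            # speed of light, m/s
LY = 9.4607e15               # one light-year in metres

galaxy_diameter_m = 100_000 * LY
brain_diameter_m = 0.15
neural_speed = 100.0         # m/s

# Minimum one-way signal time across each system.
galaxy_crossing_s = galaxy_diameter_m / C
brain_crossing_s = brain_diameter_m / neural_speed

years = galaxy_crossing_s / (3600 * 24 * 365.25)
slowdown = galaxy_crossing_s / brain_crossing_s

print(f"galaxy signal crossing: ~{years:,.0f} years")
print(f"slowdown vs a brain:   ~{slowdown:.1e}x")
```

Under these assumptions a single signal takes on the order of 100,000 years to cross the galaxy, so one galaxy-scale "thought" would be some fifteen orders of magnitude slower than one neural signal in a brain.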
I realize Mr. Schwitzgebel may have presented a conscious U.S. as a deliberately ridiculous idea to illustrate his point, much as Schrodinger did not intend his cat to be taken seriously. But as a card-carrying Science Fiction nut, I have to say, I find the idea plausible or even intuitive. Of course there could be a slow, broad form of consciousness that inspires extremely large and complex systems. It would live in the interactions and emergent coincidences of public and private life and it may even spill over into our more complex inanimate systems. It would certainly use them as tools or limbs.
I propose a ridiculously expensive study with a huge grant wherein I give the United States an aptitude and personality test. Based on the results, we could attempt to find this 239 year-old citizen some rehabilitative help to reduce some of its more antisocial and criminal tendencies.
Decided to switch out the actual title of this one ("If Materialism Is True, the United States Is Probably Conscious") for fear that it could seem clickbait-y. In fact I think the actual text is very lucid and thoughtful; for whatever reason, philosophers just seem temperamentally inclined toward "cute" titles like that.
I just want to comment that I only clicked on the article after reading your comment about the original title, and I feel that the edited title less accurately reflects the material.
You added your opinion to the title, rather than leaving it alone.
Ed:
In fact, your title less accurately reflects the material of the paper, which specifically deals with:
> Finally, the United States would seem to be a rather dumb group entity of the relevant sort. If we set aside our morphological prejudices against spatially distributed group entities, we can see that the United States has all the types of properties that materialists tend to regard as characteristic of conscious beings.
Your title completely omits the key focus of the paper, and its examination, which the original title included.
You did this because you disagreed with the paper, and used the pretense of the nebulous notion of "clickbait" to edit in your disagreement.
The original title was more accurate as to the article contents.
I appreciate the note, but at the risk of being offensive you took something that was quite reflective of the style and content of the article and made it sound dry and academic -- almost research-y. It may have outright reduced the clicks to the article; at the very least, it probably changed the sort of person who read it.
Since several users have made a good case for the existing title, we've reverted it. If it were obviously in violation of the rules (misleading or clickbait), we wouldn't do that, but this one is at most arguable.
I think samclemens was genuinely trying to follow the guidelines, though, which is good.
Perhaps ironically, the post is at #2 anyway, which is way high for this sort of thing.
I've never been a fan of anthropomorphism, nor of overloading terms (with additional definitions). Everything descends into a primordial soup when this starts happening, and it is very hard to predictably and rationally operate in a bowl of primordial soup where one thing is not distinct from another. Clear and crisp definitions and detailed distinctions are what make higher-order modeling possible. So no... societies are not in any way "conscious", even though they may at times act in ways similar to conscious entities.
I have a similar view: that a society (not necessarily a political concept like a country, but a more culturally defined group of people) is the ultimate intelligent being. The main rationale behind that notion is that society DEFINES human intelligence. We think humans are intelligent beings, but as a matter of fact, every capability that we deem a sign of intelligence, such as language, logic, and math, is given to the human by society. Suppose we had an unfortunate person who was somehow raised by a dog, without any direct or indirect contact with human society; he would demonstrate very little intellectual superiority over other advanced mammals - no language, no culture, not even logic.
As pointed out by the article, society as a distributed group entity demonstrates signs of consciousness, but that's not even the key. The key is that society defines every single consciousness of the members in it: how we think, what we want, etc. A single person's intelligence is simply a tiny subcomponent of the ultimate intelligent being - the society. How intelligent we are is mostly determined by the society (which doesn't mean that everyone thinks alike), and the exciting thing is that some of us get to contribute some improvements to that ultimate intelligent being.
Society may be great at transmitting knowledge, but it doesn't create knowledge -- people do. Even the thoughts you've just written down are your own, not society's. I can even disagree with your thought and we can throw it into the gutter.
Society may know how to create fire, but it was one man who created it for the first time, and it was the choice of just that one man to pass it on to another (and yet another). A man on an island may certainly discover fire.
Yes, but a lot of the knowledge you rely on to construct complex concepts was taught to you at some point; it didn't just manifest in your mind independent of society. Every person may add a little bit, and the little bit each person adds may be only a very small step beyond the collective knowledge that would exist had they not existed.
I guess my point is that how we have learned to attribute and locate the source of knowledge doesn't necessarily describe how it is actually created, moves, and is altered.
> to contribute some improvements to that ultimate intelligent being.
That's fair, I'm just turned off by the language of "ultimate intelligent being." Perhaps that's just my human bias, but I have trouble seeing society as anything greater than its parts, primarily people and knowledge.
I didn't read it that way, but I wouldn't have worded it that way either.
Humans are the ones that decide what intelligence is to them. We've seen many times in history that this is in fact, very unintelligent to do. But sometimes it seems to work splendidly. The individual can define him or herself as the most intelligent while everyone agrees. That really doesn't necessarily mean anything beyond everyone agreeing that they are intelligent. And some people may as a result, choose to entertain their consciousness in other ways, in order to direct it on a different course. And this may wind up being more intelligent. And then we pretend that nothing weird happened and we knew all along what real intelligence was.
:)
I tend to view things as absurd before I view them as intelligent, but my life is fairly boring.
I have to ask: how could I express any experience without society? Without society there is no language I can speak, no knowledge to rely on, no computers to write with, and no Internet, for sure.
Oh, come on! It has nothing to do with materialism, but everything to do with complex systems and what we call "ecosystems" - with the stochastic-within-fundamental-laws nature of what we call the universe or reality.
Ecosystems can exhibit what appears (to an external, hypothetical observer) to be conscious, "intelligent" behavior due to obeying underlying "forces" or "laws" - physical, environmental (herd behavior, etc.), economic (energy expenditure, diminishing returns), and evolutionary (which affect populations).
Forest or town formations (which appear to be "optimal" or "designed", while they just grew causally out of individual "processes") are obvious examples.
Financial markets could be [falsely] viewed as "intelligent", while they are just stochastic individual actions and "herd behavior".
An ant or bee colony, or a big city at night viewed from an aircraft - they all appear to have a consciousness of their own, but no, it is merely an appearance. Nevertheless, we cannot assert that these formations are purely chaotic - they are shaped by chance, but in accordance with the underlying forces and laws which govern (or limit) the behavior of individual "agents" within the system.
Just like all these atoms - they have their positions due to multiple causes (stochastic processes), but in accordance with fundamental laws of what we call "gravity", "magnetism and electricity", "conservation of energy", etc. There is no "intelligence" or "consciousness" apart from that, or That, as a Hindu would call it.
I am a dualist and I ought to accept that rabbits are conscious. They must have a soul of some sort to be able to have the Easy Problem of Consciousness between their ears. They see, they hear, they do lots of things. But I don't and I can't accept "consciousness in spatially distributed group entities."
The author, judging from the abstract (it's 7 AM here), has it upside-down. One has to first prove that spatially distributed complex entities (like an atom, a molecule, the brain, or a pen) have phenomenal experience, that is, that "there is something it's like to be a pen". Then she can prove materialism is true. But to start with materialism being true and then... well, who am I to judge.
Some say that materialism is refuting itself, in the sense that proving it true dissolves any kind of truth into non-sense.
Why would the null hypothesis be that there is a supernatural force allowing human brains to work? For any other phenomenon, do you say it's magic until proven material?
I don't think we have, so far, a clear definition of what's 'material'. Thinking of the centuries-old question 'why is there something rather than nothing', I believe magic and natural don't make much difference. Even an agnostic and occasionally religion-bashing deGrasse wonders in the face of the mathematically describable world. Isn't it magic that makes us wonder? Hence, supernatural is not such a despicable minion concept.
Talking, arguing for materialism also means that one separates materialism from 'the rest' or 'the other ways of seeing the world'.
This means that anyone calling themselves a materialist must also have a concept of this 'rest'. But what the heck is it if you fully believe in a materialist world?
I made a serious argument once that god(s) have a mind, with some of the same rationale. Many individual people contribute parts of their brain to a whole that has will, makes plans, seeks goals, has moods, etc, where each of those cognitive states do not belong to a single individual. Some individuals have more influence, but there are few religious leaders who couldn't be excised for rapidly turning against the supermind. This isn't supernatural, and it isn't mystical. It would apply equally to the Market as to the Government as to God.
But like many flights of scientific fantasy (and like the article) it also isn't very predictive, it isn't verifiable, and it isn't ultimately very useful as a model.
I abandoned the idea when I recognised it as ontological onanism.
Has the author really never heard of group consciousness? This is not a terribly interesting argument, and certainly not one that counters the materialist perspective. It reminds me of a young-earth creationist I once heard arguing that since the moon is slowly drifting away from us, it would have had to be far closer at one time if the Earth were really billions of years old. In his mind that was an unpalatable conclusion, and thus sufficient to close the case. Anyone else would have just nodded his head and said, "yeah, and...?"
I was thinking about something similar to this idea recently, but replace "United States" with a large corporation.
In a sense the United States is conscious, but the "experience" of being the United States, or Exxon Mobil, or Google, is so far removed from the experience of being a person or even a rabbit that it doesn't matter. I.e., it's a metaphor more than anything else.
We take in information through eyes, ears, etc made up of cells. We think in a brain, experience emotions, inhabit a body.
The United States takes in information through organizations, people, computers, etc. It thinks and makes decisions via all sorts of different systems and processes. It doesn't have a physical body, it has different parts and material all over the place made of many different elements.
Perhaps it's true we only value forms of consciousness similar to our own.
I don't know, I don't philosophy well. This paper was thought provoking.
I don't get the impression that Schwitzgebel takes analytic philosophy very seriously, which is something I find very refreshing about him (compared to other philosophers of consciousness such as Chalmers). His early interest was on ancient Chinese philosophy, and in http://www.faculty.ucr.edu/~eschwitz/SchwitzAbs/ZZ.htm, he promotes what he regards as an ancient Chinese outlook about not taking language too seriously, which contrasts with traditional Anglo-American philosophy.
The article and comments miss what I think are tremendously important issues in this discussion: collective memory and groupthink. People in groups often take on a particular culture with group-specific reactions, activities, and memories. We've seen people get internally divided between their individual thought/reaction and the group thought/reaction to a given experience. Further, although the author rejects the U.S. feeling pain, Americans as a whole reacted with much the same pain and shock when they watched planes hit the Twin Towers. Their emotional response mostly went in the same direction, with the conflicts coming in on their assessments of what happened and what to do. Much as happens internally in the brain, conflict resolution and consensus kicked in to converge the majority on an overall reaction that all Americans were conscious of.
So, groups and even America often act like an individual might, even in terms of feelings. The group's memory is spread out among parts of its members. New members, like the author's aliens, even get some of that memory through information exchange, stories, shared feelings, and indoctrination. Their minds can transition to a cross between the individual mind and the group's mind.
So, the brain stores its knowledge in neural connections and states updated by certain processes. Consciousness emerges. Details of it have varying levels of internal strength, conflict, and so on. Human groups store their knowledge in individual brains and the connections between them via the senses. The group's consciousness similarly has varying levels of commitment to specific ideas or actions, with conflict. The question is, "What's the real difference here, and how far can it go?"
If it can go far enough, then the U.S. might be conscious, have a collective memory, have the ability to feel pain collectively, and have an intent formed of members' consensus. That's stronger than the author's own claim. Yet, I think I've cited examples to back some of it up. The U.S. itself doesn't need a brain: its identity can be made of pieces of other brains, both storing knowledge & having feelings, plus their interactions with each other.
Note: I'll end with a possibly controversial view: I don't think all of the U.S.'s members make up its consciousness. I think it filters out a lot of them. It has its own self-organization and learning principles that are quite a bit different from the brain's, aside from basic concepts of sensory processing, reflection, conflict, and consensus.
I've had a similar suspicion for a while. Superorganisms of micro-organisms are one place we find consciousness; superorganisms of larger organisms (like us) seem at least as likely to be conscious, if not more so, given that consciousness is already present in the components at some level.
One should really look into the works of the Russian publicist Sergey Pereslegin. He (along with other people) has produced a lot of fascinating speculation on such concepts (superorganisms composed of humans and organizations).
Consciousness is an appearance, the same as when an external observer sees "intelligence" in the behavior of a whole colony of bees or ants or, with special tools, bacteria.
One of CMU's AI gurus (I forget his name), back in the 60s, brilliantly described the principle that the "incredibly complex" observable behavior of an ant is due not to its "intelligence" but to the obstacles in its environment. This is the clue.
There are "meditation" techniques for observing the "discrete" (non-continuous) nature of what we call consciousness.
Maybe I'm missing something; this doesn't strike me as odd at all. I mean, one could say the United States chose to legalize gay marriage, or chose to forge a deal with another conscious entity, Iran. This is similar to a person making a change in their own body or making peace with an adversary, respectively. One could liken the democratic process and the political system to conscious thinking, perhaps.
How is this any different? If you relax the definition of "conscious", then why isn't the United States conscious?
Can you provide a definition for consciousness? Perhaps you have a different notion than most people keep in mind (even if they are unable to explain).
I'm using a very simplistic definition of the term: essentially, anything that responds to input is "alive." I'm actually basing it on the notion the author gives, since he considers rabbits to be "conscious." I'm not sure rabbits are self-aware, though, in the sense that humans are, i.e., being a "strange loop" as Douglas Hofstadter would put it. Self-awareness is one thing; consciousness, as the author presents it, is another.
The thing is, externally, one can talk about a nation in the same ways we talk about a person, a group, or even a couple. You can consider it responding to input and even being "self-aware" as an entity, even being deliberative. A materialist doesn't believe in the immaterial; s/he can only judge an entity by observing its reaction to stimuli. So, as far as s/he can judge, the materialist can observe the US from the outside and apply the duck rule[1] to conclude it's conscious.
However, I'm not talking about qualia[1]. Qualia, a materialist would say, are an illusion. But, again, going by the author's definition, I'm not sure we can say whether rabbits or flies experience qualia, and qualia are a hard thing to reason about from outside the head. For me, the question of qualia is a much more interesting argument against materialism.
Disclaimer: I'm not sure I'm a materialist, but I'm a scientist by trade, so I have to work in that mindset. Still, I never felt comfortable with it.
>However, I'm not talking about qualia[1]. Qualia, a materialist would say, are an illusion. But, again, going by the author's definition, I'm not sure we can say whether rabbits or flies experience qualia, and qualia are a hard thing to reason about from outside the head. For me, the question of qualia is a much more interesting argument against materialism.
Nobody says qualia are an illusion, in the sense of not really existing. People just say, "Qualia are not metaphysically spooky and immaterial; they're things your brain makes."
Anything that responds to input? So a Roomba or those little robot dinosaur pets? Nginx hooked up to a photosensor?
Also: A materialist view of consciousness does not need to examine from the "outside". After we have figured out what consciousness is, then it's a matter of examining whatever bits of matter there are and seeing if they have the right properties. Of course that requires some serious breakthroughs, ones some folks are convinced aren't possible.
Maybe consciousness fades out as the size of the group gets larger and larger, and there's some limit beyond which it can't be conscious anymore (probably related to the speed of sound).
But consider this: you are aware of yourself, but how do you know that you are the only thing occupying your brain? I mean, what if your brain spawns 5 different conscious processes, each of them completely separate and independent from the others? And what you feel as yourself is just one of these 5 processes.
Well, if I never see myself saying or doing something that MY process hasn't decided itself, it's as if those extra processes don't exist for all practical purposes.
Actually I've heard that scientists have proved that your awareness of your brain's decision sort of comes after the decision has already been made by your brain.
Not to mention there's a lot of "subconscious" stuff that happens inside one's mind.
>Actually I've heard that scientists have proved that your awareness of your brain's decision sort of comes after the decision has already been made by your brain.
I've read that too. But philosophically I think people draw the wrong conclusions from this -- like we're puppets and the brain is some autonomous third agent.
My brain and my consciousness are part of the same thing, so making the decision (in the background) and having it come to awareness a little later is still the same entity "thinking".
I don't have to think "out loud" in my brain (consciously) to make a decision: my brain can make it by drawing directly and subconsciously on the same memories, sensory inputs, biases, etc. that I have available when I think consciously.
Neuroscientifically, we already know that the brain does contain multiple parallel streams of processing. "You" is a combination-symbol for all of them, plus the body.
tl;dr the United States may be conscious because consciousness is a purely physical manifestation.
You could skip most of this paper and ponder the deeper question posed by Sam Harris: how do we know anything is conscious? What if stars are conscious? Could we ever tell? Fundamentally, we don't know the mechanisms that give rise to consciousness, so in theory anything with a complex enough physical system could be conscious. A country could be conscious and we'd likely never know. Fun to think about!
I think that to a materialist, the question might not be a hard one at all. The theory may well be that consciousness depends on the strength of the connections between the components of the collective "mind," and that some range of connection power (i.e., the ability of one component to influence others) is required. You then measure the connection strength among people in the US, and it may turn out to be well below the threshold required for the thing that gets defined as consciousness. After all, solids and fluids behave differently enough for the difference to be considered qualitative, even though there is just a quantitative difference in the strength of the connections between their components. So there may be nothing special about our brains except that the strength of the connections between neurons lies within certain "magic" bounds. Collectives with connection strength either below or above those bounds may display some behaviors similar to intelligence or "consciousness," but not quite.
Other possible but similar measures could be the number of components (much larger in the brain than in the US or an ant colony) or the average number of connections each element has. In any case, the result may be the same: a quantitative difference may lead to a qualitatively different result.
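The "magic bounds" idea above can be made concrete with a toy sketch (my illustration, not anything from the paper): score a collective by its average component-to-component connection strength and call it a candidate for consciousness only inside a band. The function name, the bounds, and the sample scores are all made up for illustration.

```python
# Toy model of the "connection strength within magic bounds" idea.
# All names and numbers are hypothetical, chosen only to illustrate
# that a quantitative measure can yield a qualitative cutoff.

def in_conscious_band(avg_connection_strength, lo=0.3, hi=0.9):
    """Return True only if the average coupling between components
    falls inside the (entirely made-up) 'magic' band [lo, hi]."""
    return lo <= avg_connection_strength <= hi

# Neurons in a brain: strong, fast coupling -> inside the band.
print(in_conscious_band(0.6))   # True
# Citizens of a nation: weak, slow coupling -> below the band.
print(in_conscious_band(0.05))  # False
```

Under this sketch, the US and a brain differ only in where they land on one axis, which is exactly the commenter's point: a purely quantitative difference produces what looks like a qualitative one.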
On a related note, I'm of the opinion that AIs have already taken over humanity, i.e., corporations.
As we develop AI, I see it naturally augmenting corporations' ability to make decisions, eventually supplanting humans in high-order strategic planning.
Once that happens (and it will!)... hope for the best?
If the argument of this paper is correct, I think it would be evidence against materialism. If you find a position has an implausible consequence, that is evidence against that position, especially if its competitors (e.g. dualism, idealism) lack that consequence.
I don't think panpsychism is common in dualism - the vast majority of dualists are not panpsychists. The argument of the paper is that "the United States is conscious" is a probable consequence of materialism. For your point to succeed, we'd need an argument that panpsychism is a probable consequence of dualism (or idealism, or so on), as opposed to merely a possibility compatible with it. Since materialists need to ground mental concepts in physical ones, they run the risk that their attempts to do so may have a broader scope than they intend (as this paper argues). Dualists don't need to ground the mental in the physical, so they don't run that risk.
This is an old idea, which doesn't make it less interesting. I prefer to think about the Internet possibly being conscious... more interesting because maybe the Internet consciousness is where the singularity will come from.
Kind of cute to care about this, impressive references and all.
Thinking about thinking, troubling as it may be, presumes a supremacy of human consciousness whose sensibility is its own determinant.
A waterfall is a pretty analogy for the State. My point being... it's interesting that it holds together, but that being established, play your part or propose some alternatives.