
The anti-correlated behaviour of these two networks, and even their default mode vs. attention functions, reminds me of the attention schema theory of consciousness [1].

Specifically, the attention schema theory posits that some constant back and forth signal switching between internal and external models of the world results in the illusion of subjective awareness, in an analogous manner to how task switching provides the illusion of parallelism on single-core CPUs.

[1] https://www.frontiersin.org/articles/10.3389/fpsyg.2015.0050...




So pseudo-BASIC for the consciousness algorithm:

  10 look at world
  20 look at my reaction to world
  30 goto 10
Which generates consciousness like frames per second generates motion. Or like the colored lines over this black and white photo generate a color image:

https://twitter.com/SteveStuWill/status/1248000332027715584/...


> 20 look at my reaction to world

Who is the "my" you are referring to? You have an 'a priori' conflict.


By "my" I guess he's referring to an internal state that has been built up over previous interactions with the external world? So more like:

  10 receive information from world
  20 do operations on information
  25 update state based on operation result
  30 goto 10


I think it's more like:

10 receive information about the world, body and reward signals

20 evaluate the current situation and possible actions

30 act

40 goto 10

Emotion arises in step 20, when we judge situations and actions in context as good or bad. In this step two subsystems cooperate:

1. a system for fast reaction - works best when there is no time to reason, or when the action is repetitive, or when available information is uncertain

2. a system for slow, reasoning based reaction - works when we can build a mental model and imagine possible outcomes, is especially necessary when we encounter novel situations and have the concepts necessary to reason about it

System 1 is based on instinct and system 2 is learned. They are both essential, as they are specialised for different situations. Using system 2 all the time would be too expensive and probably impossible; we need to rely on instinct, which in turn relies on evolution to be fine-tuned.

Learning happens through the reward signal. We reevaluate situations and actions based on outcomes. Emotion is just a synonym for the value we assign to our current state with regard to our goals and needs.

Our goals include adapting to the environment in order to assure the integrity and necessities of life - the primary goal, then as secondary goals - being part of a social group, learning, mastery, conceiving children, curiosity and a few other instincts. We are born with this goal-program which is in turn evolved.
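The loop and the two-system evaluation described above can be sketched as a toy agent. This is only an illustrative sketch; all class, method, and action names (`Agent`, `learn`, `rest`, `forage`) are invented, and "system 1" is modeled crudely as a cached habit while "system 2" deliberates over a learned value table:

```python
class Agent:
    """Toy perceive-evaluate-act loop with a fast and a slow system."""

    def __init__(self, actions):
        self.actions = actions
        # learned evaluation of actions (step 20); emotion ~ this value signal
        self.value = {a: 0.0 for a in actions}
        # system 1: a cached reaction for when there is no time to reason
        self.habit = None

    def act(self, time_pressure):
        # step 20/30: pick an action
        if time_pressure and self.habit is not None:
            return self.habit  # system 1: fast, instinct-like
        # system 2: slow deliberation over evaluated options
        return max(self.actions, key=lambda a: self.value[a])

    def learn(self, action, reward, lr=0.5):
        # step 40 feedback: reevaluate the action based on outcome
        self.value[action] += lr * (reward - self.value[action])
        # refresh the cached habit from the updated evaluation
        self.habit = max(self.actions, key=lambda a: self.value[a])

# one learning episode: acting "forage" paid off
agent = Agent(['rest', 'forage'])
agent.learn('forage', reward=1.0)
```

After a rewarding outcome, both the fast and the slow path converge on the same choice, which loosely mirrors how repeated reasoning hardens into instinct-like habit.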


Yeah, this "perception-action cycle" has been well known and taught in neuroscience for a long time. What's new, at least since I studied it, is the anti-correlated tick-tock of these two key networks that seems to be happening. Amazing how similar to a game engine consciousness seems to be panning out.


> The default mode network (DMN) is an internally directed system that correlates with consciousness of self, and the dorsal attention network (DAT) is an externally directed system that correlates with consciousness of the environment

DMN : self awareness :: DAT : non-self awareness. The distinction is at least partly hard wired. If there's a self, there's a "my".

Another name for consciousness is self-awareness, which requires a self. And what else could a "self" be but a neural construct? This article is a theory of its construction.


I would say the Kalman filter is quite appropriate here, where the state is our model of the world and the 'page flip' is a state update based on observation. It is also the same as a Bayesian model update:

> state = kalman(state, observation)
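For concreteness, a minimal 1-D sketch of that `state = kalman(state, observation)` loop, where the belief is a Gaussian (mean, variance) and each noisy observation is folded in as a Bayesian update. All names and numbers here are illustrative, not from the thread:

```python
def kalman_update(mean, var, obs, obs_var):
    """Fold one noisy observation into a Gaussian belief."""
    k = var / (var + obs_var)            # Kalman gain: how much to trust the observation
    new_mean = mean + k * (obs - mean)   # shift belief toward the observation
    new_var = (1 - k) * var              # belief sharpens after each update
    return new_mean, new_var

# the 'page flip': repeatedly fold observations into the internal model
state = (0.0, 1000.0)                    # vague prior: mean 0, huge variance
for obs in [4.9, 5.2, 5.0, 5.1]:
    state = kalman_update(*state, obs, obs_var=0.1)
```

The belief converges on roughly the mean of the observations while its variance shrinks, which is the "state update based on observation" the comment describes.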


With a Kalman filter the model always stays the same and is used to pick which sensor is reporting most accurately. Changing the model would change everything; maybe that's why changing your worldview has such an impact on the way you "see" things.


Taking this further, I suppose you could consider persistence of vision [1] as analogous to consciousness.

It's an artifact of the limitations of the system.

[1] https://en.wikipedia.org/wiki/Persistence_of_vision


A persistence effect has to exist in one way or another to get real-time self-awareness; if it were not an input alternation, it would exist at the reasoning level.


Except without the frames and colored lines. Lines 10 and 20 don't provide the experiences. They're just behavior. Somehow all the sensations have to be added in when looking at the world and looking at one's reaction to the world.


let experience$ = INKEY$


Calling it an illusion is not interesting. We define consciousness and subjective experience to match the very experience we understand.

There is literally no way for it to be an illusion, the definition itself precludes it. No matter how consciousness and subjective experience are implemented in the hardware of our brains, it is still a concept that we use to describe the experience, and the experience is real no matter what.


> No matter how consciousness and subjective experience are implemented in the hardware of our brains, it is still a concept that we use to describe the experience, and the experience is real no matter what.

What does it mean for something to be "real"? Physics says your car is not actually real. There is no "car" particle or field in the physics ontology, there is no physical experiment we can run to definitively test whether something is a car or is not a car, such that aliens that evolved on another planet would agree perfectly with your assessments. If physics is our best theory of what actually exists, then your car doesn't really exist.

Analogously, this is the crux of the hard problem of consciousness: is the qualitative experience of consciousness actually real, or is it reducible to third-party objective facts, like every other phenomenon we've encountered, and so the irreducible properties it seems to have are actually an illusion that is reducible to non-conscious particles and fields?

Given the way you've phrased your post, that the brain "implements" consciousness, I expect you might agree that such a reduction is ultimately possible. In that case, you too might be an eliminative materialist, which asserts that consciousness does not really exist.

That said, all materialists agree that phenomenal experience requires an explanation, it's just that they assert that explanation will come from neuroscience. Antimaterialists assert that no such explanation is possible.


And yet most people call it a car, consider it real, and at the same time don't see a problem in reducing it to its physical properties.

"is the qualitative experience of consciousness actually real, or is it reducible to third-party objective facts"

Why not both?

The "real" refers to our subjective experience. That there is something that is like to be me. Something that is like to be a bat. And at least under certain definitions, that's what we call conscience. That something I know I experience and that I doubt a computer is experiencing too.

Why would this be incompatible with reducing this experience to third party objective facts? We simply don't know but I don't see why we couldn't.


> > "is the qualitative experience of consciousness actually real, or is it reducible to third-party objective facts"

> Why not both?

Because those are mutually exclusive options. Either something is ontologically fundamental, or it's not. It can't be both.


After posting my comment yesterday, I read some of your other comments and I don't think we actually disagree.

The problem is probably the definition of "real".

I'm not arguing that consciousness is a fundamental property of the universe when I say it is real. The jury is still out but that's not what interests me the most. When I say "real" I'm talking about my subjective experience. It is real in the sense that it is something that I know I experience, regardless of the mechanisms involved.

Ultimately, what I'm interested in is finding out how it arises. Explaining it. Reducing it to its "third-party objective facts" if possible. Being able to look at a machine that mimics us and tell if that machine is experiencing something comparable to what we experience.



Will do, thanks!


Can you visualize a car?


Sure.


Consciousness is basically the only thing we can conclusively say is NOT an illusion, right?


Exactly! The illusion argument (let us call it ArgIllusionCons) that applies to consciousness can be applied with just a few extra steps to everything we perceive, including ArgIllusionCons itself.


Perhaps, but there's no guarantee that it's anything more than a transient state that lasts at most until you next lose consciousness. The consciousness that you have today may have no relation to that of yesterday or tomorrow, apart from running on the same brain "hardware" and having access to the same memories.


No, you can safely assume that your thoughts exist in the moment that you have them. Consciousness is more than simple thoughts.


I think therefore I am.



I can see how that would lead to an illusion of continuous subjective awareness, but I don't think it supports the notion that subjective awareness is entirely illusory. I think therefore I am, the existence of qualia [1], etc.

[1]: https://en.wikipedia.org/wiki/Qualia


"I think therefore I am" assumes the conclusion. "This is a thought, therefore thoughts exist" is the valid, non-circular version.

The attention schema theory addresses the specific problem of how we apparently infer first-person subjective facts when no such concept exists in physics, the latter of which consists entirely of third-person objective facts. The answer is that we erroneously conclude that the facts we perceive are first-person, but this perception is a sensory trick, similar to an optical illusion.

The question of qualia is larger than this specific question, but subjectivity was probably an important problem to overcome for a materialist explanation of consciousness. Dennett has long held that what we call "consciousness" is very likely a bunch of distinct phenomena that all get muddled together, and the fact that we have started to pick it apart hints suggestively that he was right.


> The answer is that we erroneously conclude that the facts we perceive are first-person, but this perception is a sensory trick, similar to an optical illusion.

I'm extremely skeptical of answers that involve labeling difficult challenges to a theory as "illusions."


> I'm extremely skeptical of answers that involve labeling difficult challenges to a theory as "illusions."

So calling [1] an optical illusion warrants skepticism because it's attempting to dismiss the challenge of having to explain how water can physically break and magically reconstitute pencils? Don't you see the problem with this sort of argument?

The point is that integrating all of our knowledge leaves no room for first person facts. Additionally, every time we've tried to ascribe some unique or magical property to humans or life (like vitalism), we've been flat out wrong. No doubt there are plenty of challenges left to resolve in neuroscience, and no one is claiming that a materialist account of qualia is unnecessary.

[1] http://media.log-in.ru/i/pencilIn_in_water.jpg


> So calling [1] an optical illusion warrants skepticism because it's attempting to dismiss the challenge of having to explain how water can physically break and magically reconstitute pencils? Don't you see the problem with this sort of argument?

Yeah, but that's a bit of a straw man.

The kinds of claims-of-illusion that warrant particular skepticism are the ones that deny fundamental observations in defense of some particular (usually sectarian, for lack of a better word) philosophical perspective.


What makes an observation "fundamental"?


naasking, as I see it, Dennett (in Consciousness Explained) engages in a sleight of hand. He redefines consciousness as "a bunch of distinct phenomena that get muddled together", but that doesn't touch the mystery of qualia; it just tries to deny that there is anything to explain, owing to the fact (Dennett claims) that the problem is mischaracterized from the start.


And your mind plays sleight of hand all the time, which Dennett clearly establishes in his work. Or do you actually see the physical blind spot that's a fundamental feature of your eyeball?

So why would you trust your direct perception over the mountains of evidence that clearly demonstrates that we can't trust our perceptions?


No literate scientifically minded person disputes your point, but it doesn't address my point. My point is this: qualia as a phenomenon exists. Even if I think a red thing is blue, I am still experiencing some color, and the experiencing itself, aside from its accuracy, is what needs explaining.

So experience, a.k.a. qualia as a phenomenon unto itself, is in need of explaining; not any particular quale, and not the presence or absence of any correlation between the qualia and objective reality, i.e. the "truthfulness" or "accuracy" of the qualia.


> My point is this: qualia as a phenomenon exists. Even if I think a red thing is blue, I am still experiencing some color, and the experiencing itself, aside from its accuracy, is what needs explaining

Dennett wouldn't deny that either. He would simply say that we have no reason to think the qualitative experience of our perceptions are anything other than a cognitive trick with a functional purpose. Certainly how this trick works should absolutely be explained, and I don't think any materialist would deny that.


It is what Dennett thinks. Actually, it's what Dennett thinks he thinks, because that idea of Dennett's is inherently nonsensical; it is self-contradictory.

Here's why.

To explain qualia as a "trick" is to void the ontological status of qualia itself. He can't do that. It doesn't matter if it's all an illusion or a trick, it doesn't matter what its ultimate epistemological status is. Qualia is experienced, and it's the experience itself, whatever its biological underpinning turns out to be (you can't have sight without eyes), which is relevant.

Yes, all experience could be fallible and illusory, but the fact of experience itself cannot be an illusion.

Experience qua experience is the thing no scientific theory of perception and cognition needs. So why does it exist? In other words, why are we not as not-conscious as rocks and chemical processes and planets and electrical activity, doing all we do, saying all the things we say? It's certainly possible.

Dennett (and I am inferring this; I haven't heard him say it) is an ontological positivist. Only those things which the methods of science reveal to exist are "real", and everything else is, as you say, some kind of illusion. Sounds good. But an illusion (which is some experience whose epistemology we have misconstrued) is not itself an illusion. Its ontological status as "a thing which does exist" is secure.


> Yes, all experience could be fallible and illusory but the fact of experience itself cannot be an illusion.

Sure it can, and it remains only to explain how and why this illusion works to fool us into making erroneous statements, like "the fact of experience itself cannot be an illusion".

> Experience qua experience is the thing no scientific theory of perception and cognition needs. So why does it exist?

It probably doesn't! Although I'm not as convinced as you that qualia are entirely non-functional.

> But an illusion (which is some experience whose epistemology we have misconstrued) is not itself an illusion.

What is an illusion? To my mind, an illusion is a perception or inference thereof that, taken at face value, entails a falsehood. So to call phenomenal consciousness an illusion is to say that the claims inferred from our direct perceptions are false, eg. "I have subjective awareness". There's nothing problematic about this that I can see.


>What is an illusion? To my mind, an illusion is a perception or inference thereof that, taken at face value, entails a falsehood.

But that is not the part of the illusion we're interested in. The part of it we're interested in is the part it shares with all other experiences. It was an experience. Full stop. That fact can't be gainsaid.

What you're using to deny this is the epistemological status of the illusion experience. So that's things like "it was caused by brain cells XYZ firing" or "it did not accurately represent reality" or "it did not correspond to anything in reality at all". All those things could be true but they are beside the point being made.

Either one gets this fundamental idea or they don't in my experience (lol).


We're discussing the ontological status of phenomenal experience, so its illusory nature is very much relevant to this question.

No one, not even eliminative materialists, would deny that people have what they believe to be phenomenal experience. See Frankish [1]:

> Does illusionism entail eliminativism about consciousness? Is the illusionist claiming that we are mistaken in thinking we have conscious experiences? It depends on what we mean by ‘conscious experiences’. If we mean experiences with phenomenal properties, then illusionists do indeed deny that such things exist. But if we mean experiences of the kind that philosophers characterize as having phenomenal properties, then illusionists do not deny their existence. They simply offer a different account of their nature, characterizing them as having merely quasi-phenomenal properties. Similarly, illusionists deny the existence of phenomenal consciousness properly so-called, but do not deny the existence of a form of consciousness (perhaps distinct from other kinds, such as access consciousness) which consists in the possession of states with quasi-phenomenal properties and is commonly mischaracterized as phenomenal.

[1] https://nbviewer.jupyter.org/github/k0711/kf_articles/blob/m...


If we can't trust our perceptions, then there is no mountain of evidence to say the mind is playing a trick on us regarding consciousness. That's because the scientific evidence is empirical, which is knowledge based on perception. Dennett's argument risks undermining the foundation for scientific knowledge.


The question of knowledge is indeed tricky. Your specific objection has kind of already been answered by science: you can't trust your senses, so you build instruments to extend your senses into domains you can't sense, you translate that data into sensory data you know is somewhat reliable, and you adhere to a rigorous process of review, replication and integration of all observations into a coherent body of knowledge. Eventually, you converge on a reliable, replicable body of knowledge.

And so far, this body of knowledge suggests strongly that we can't trust our perception of consciousness.


> And so far, this body of knowledge suggests strongly that we can't trust our perception of consciousness.

But perception is a part of conscious experience. We don't perceive consciousness independent of things in the world. They go hand in hand. So we know about the world because we have conscious experiences of perceiving the world.

What Dennett and others are trying to argue is that only the qualities of perception which are objective exist, even though those qualities are accompanied by the subjective qualities. So we know the shape of an object by color and feel. If you abstract the shape out and argue the colors and feels aren't real, then what status does our knowledge of the abstract shape have?


goatlover said: "only the qualities of perception which are objective exist,"

If you change "objective" to "scientifically validated" you have defined a position known as "ontological positivism", and I would say Dennett does subscribe to this.

The funny thing is, the school of thought which says consciousness is computation by our brains and any computer properly programmed can be conscious, a school known as "functionalism", itself gives ontological status to a non-corporeal abstract thing, namely computation.

If computation can be abstracted from not just the brain but any substrate at all, then it exists. So computation can take place on an AMD chip, an Intel chip, a Turing tape, and anything you'd care to rig up made out of anything whatsoever, so long as it could represent the computation of a Turing tape.


> If computation can be abstracted from not just the brain but any substrate at all then it exists.

Existence is tricky as a proposition, as Kant famously argued. Does the following make sense to you: the law of non-contradiction exists.

Computation has a similar logical character to other rules of logic. In fact, intuitionism ensures a 1:1 correspondence between the two. So computation is not a "non-corporeal thing" any more than any other form of logic. If you take rules of logic to also be non-corporeal things, well then this "problem" you speak of was present in functionalism from the start, and yet it doesn't seem to trouble anyone.


> and yet it doesn't seem to trouble anyone.

Do mathematical things exist independently of the minds who conceive of them? The ontological status of abstract things, right? My point is, materialists deny these kinds of things. That's disembodied spiritual bunkum and it has no place in modern thinking.

Then, later in the day, they're perfectly happy to deal with things just as abstract and non-corporeal without feeling like they're cheating in any way.

The fact is the philosophy of science has not caught up to the advances in science as any good QM thread here will show.

>Does the law of non-contradiction exist?

The fact that neither of us can answer this (assuming we both agree to what it implies about the world, which actually, heh... I am not totally convinced of, but that's another matter) ... anyway the fact that neither us can answer this in the way you meant it is an interesting fact in the same family of interesting questions as raised in this discussion.

The quarks->atoms->molecules->neurons->brains->experience (consciousness) chain of causality, which is the standard model of reality and has been for a few hundred years now, is broken at both ends, by which I mean the descriptive philosophical ideation at both ends is to no one's real satisfaction.


> Do mathematical things exist independently of the minds who conceive of them ? The ontological status of abstract things, right? My point is, materialists deny these kinds of things.

Sure, and they would have to provide some sort of naturalist account for mathematics. There are some proposals for this kicking around.

> is broken at both ends by which I mean the descriptive philosophical ideation at both ends is to no one's real satisfaction.

Indeed, there is no hole-free reduction along the chain you cite, but those holes are continuously shrinking. This is why I consider the special pleading around consciousness a god of the gaps. There are some very interesting puzzles around consciousness, but I think ascribing a special status to consciousness will ultimately be abandoned, just like vitalism.


Platonism has been an ongoing debate for centuries, so the status of numbers and logic do bother some people.


Agreed, but I was referring specifically to it not bothering functionalists.


Science is a set of evolving traditions about how to fix errors and it relies on the consciousness/perception of individual scientists. Consciousness/perception is error-prone but it does seem intimately connected with the correction of error, too, as we strive towards better understanding. Aren't we compelled to trust it in this regard? That things will seem to be more like they really are, including consciousness itself?


>"I think therefore I am" assumes the conclusion.

But the thought is self-referential. The thought is about itself thinking, and so the thought instantiates the sufficient case for a subject. And so there is no question-begging.

>The answer is that we erroneously conclude that the facts we perceive are first-person

But as long as these facts are presented or represented as being first-person, the sufficient case for first-person acquaintance has been established. Whether these first-person facts are ultimately grounded in third-person descriptions or phenomena doesn't make them illusions.


> But the thought is self-referential. The thought is about itself thinking, and so the thought instantiates the sufficient case for a subject.

You just assumed the existence of a subject again. Where is the proof that a thought requires a subject? One that isn't vacuous or doesn't just assume its own conclusion?

> Whether these first-person facts are ultimately grounded in third-person descriptions or phenomena doesn't make them illusions.

It does for the technical purposes of the consciousness debate. The terminology we're using, like "illusion", has a technical meaning for the debate between materialists and antimaterialists, wherein antimaterialists argue that a first-person fact cannot be reduced to third person facts, even in principle.

Obviously even materialists speaking informally would still use first person language and speak normally about their experiences.


>Where is the proof that a thought requires a subject?

I'm not sure what you're asking. If the thought is self-referential, then the subject is inherent in the self-reference of the thought, namely the thought itself. I am not assuming some kind of conscious subjectivity here. Merely that the content of the thought is instantiated in the case of self-reference. If this were not the case, then the thought could not self-reference.

>has a technical meaning for the debate between materialists and antimaterialists

I'm familiar with the usual suspects in this debate (e.g. Dennett, Frankish), and I don't find their usage of illusion particularly "technical". They use it to mean that phenomenal consciousness doesn't exist or isn't real. But it's this very usage that I take issue with.


> If the thought is self-referential, then the subject is inherent in the self-reference of the thought, namely the thought itself. [...] If this were not the case, then the thought could not self-reference.

What property of self-reference do you think entails the properties of a "subject"? Does a self-referential term in a programming language also have a subject?

I suspect you would deny they do, so why does a self-referential thought entail a subject when self-reference in other domains does not? Because the only answers I can come up with either amount to special pleading on behalf of thoughts, or admitting a superfluous concept into every self-reference that makes no useful distinctions and conveys no useful properties that I can see.

> They use it to mean that phenomenal consciousness doesn't exist or isn't real.

And what do you think they mean by "isn't real" or "doesn't exist"? Because in this setting, such a claim to me means that phenomenal consciousness is not ontologically fundamental. That's a pretty technical meaning. Dennett certainly didn't mean that phenomenal consciousness doesn't exist as a concept deserving a philosophical or scientific explanation; clearly it does or he wouldn't have written about it.


>so why does a self-referential thought entail a subject when self-reference in other domains does not?

I have a feeling you're taking "subject" to be a far more substantial concept than I am. A subject at its most austere is merely a distinct target of alteration. That is, it is subject to change from external or internal forces. So to have a self-referential thought is to say you are subject to change: from not having a thought to having a thought, thus you are a subject of some sort (the thinking sort). Although a thought itself does not necessarily imply a subject, recognizing one is having a thought (a self-referential thought) confers subject-hood on its bearer.

But to address the specialness of thought: thoughts have content. For example, a thought about an apple intrinsically picks out an apple. The thought somehow contains the semantics of the object of its reference. This is in contrast to, say, a bit that gains meaning from the context in which it is used (e.g. night/day, heads/tails, etc). And so a self-referential thought intrinsically contains the necessary structure to entail subject-hood. Self-reference in other contexts does not have this intrinsic content requirement.

>Because in this setting, such a claim to me means that phenomenal consciousness is not ontologically fundamental.

Frankish is very clear that he is eliminativist about phenomenal consciousness. He is not intending to simply be reductive about phenomenal consciousness, i.e. that it is "real" but reduces to physical structure and dynamics. His claim is that we misrepresent non-phenomenal neural structures as being phenomenal. This can be cast as a sort of reduction, as you're referring to. But he is clear that this is not what he intends.

As far as Dennett goes, he seems to endorse Frankish's project, so I would take him to also be eliminativist about phenomenal consciousness. But reading his own work, he's a lot more slippery about whether he's eliminativist or not, so I'm not sure where he actually lands.


> But to address the specialness of thought: thoughts have content. For example, a thought about an apple intrinsically picks out an apple. The thought somehow contains the semantics of the object of its reference. This is in contrast to, say, a bit that gains meaning from the context in which it is used (e.g. night/day, heads/tails, etc).

I don't see any distinction beyond the complexity of the information content. That thought about an apple carries an implicit context consisting of eons of evolution in a world governed by natural laws, ultimately bootstrapping a perceptual apparatus that you've used to accumulate relational knowledge throughout your life.

That digital bit you speak of was generated within the context of a program also governed by rules that gives it some meaning within that program, although that context is considerably smaller than a human's thoughts. I honestly can't think of any non-contextual expressions besides axioms, so I don't accept the distinction you're trying to make.

And I'm not assuming anything about what you might mean by "subject". If a thought is like any other datum tied to a context, then I don't see that a subject is necessary, unless we explicitly define "thought" and "subject" to be mutually recursive. I just don't see how adding a "subject" in the context of every self-reference makes any meaningful distinctions, and so it appears entirely superfluous in that context.

Anyway, as a fun exercise, someone had asked in this thread about a formalization of this argument, so I played with expressing the self-reference and existence proof in Haskell via Curry-Howard. This is what I have so far:

    {-# LANGUAGE ExistentialQuantification #-}

    -- a witness that a value of type a exists
    data Exists a = Exists a

    -- a thought is something that can consume itself to yield its content
    newtype Thought a = AThought (Thought a -> a)

    -- self-reference: a thought applied to itself yields its content
    thisIsAThought :: Thought a -> a
    thisIsAThought this@(AThought x) = x this

    -- under Curry-Howard, this term witnesses that thoughts exist
    thoughtsExist :: forall a. Exists (Thought a)
    thoughtsExist = Exists (AThought thisIsAThought)
I don't think this quite captures it, but expressive yet sound self-reference is tricky in typed languages.

> He is not intending to simply be reductive about phenomenal consciousness, i.e. that it is "real" but reduces to physical structure and dynamics. His claim is that we misrepresent non-phenomenal neural structures as being phenomenal. This can be cast as a sort of reduction as your referring to. But he is clear that this is not what he intends.

This doesn't read as consistent to me. Frankish's paper [1] might be the best bet for clearing this up. The way he breaks it down seems consistent with what I've been saying, that we seem to perceive certain subjective qualities but that these qualities aren't really there, they're a trick, and so we simply need to explain why we think we have them.

I admit that my simplified analogies don't perfectly capture the nuance between Frankish's "conservative realism" vs. illusionism, but most discussions don't get so detailed.

So this all seems to hinge on the meaning of "real". Phenomenal consciousness is "real" in the sense that it can drive us to talk about phenomenal consciousness, at the very least. But like belief in other supernatural phenomena, it's likely a mistaken conclusion. This seems to be essentially what Frankish says:

> Does illusionism entail eliminativism about consciousness? Is the illusionist claiming that we are mistaken in thinking we have conscious experiences? It depends on what we mean by ‘conscious experiences’. If we mean experiences with phenomenal properties, then illusionists do indeed deny that such things exist. But if we mean experiences of the kind that philosophers characterize as having phenomenal properties, then illusionists do not deny their existence.

[1] https://nbviewer.jupyter.org/github/k0711/kf_articles/blob/m...


>I don't see any distinction beyond the complexity of the information content. That thought about an apple carries an implicit context consisting of...

Sure, if our measuring stick is information, then there is no difference in kind, merely a difference in complexity. But the complexity difference between the two is worlds apart, thus substantiating the distinction I'm pointing to.

But information is a poor measurement here. The quantity of information in a system tells you how many distinctions can be made using the state of the system. But information doesn't tell you how such distinctions are made and what is ultimately picked out. For something to be intrinsically contentful, it has to intrinsically pick out the intended target of reference, not merely be the source of entropy from which another process picks out a target.

So in a structure that has intrinsic content, the process of picking out the targets of reference is inherent as well. This means that structural information about how concepts relate to each other is inherent, such that there is a single mapping between the structure as a whole and the universe of concepts. This requires a flexible graph structure so that general relationships can be captured. It's no wonder that the only place we currently find intrinsically contentful structures is in brains.

>I honestly can't think of any non-contextual expressions besides axioms, so I don't accept the distinction you're trying to make.

Do the thoughts in your head require external validators to endow them with meaning, or do they intrinsically have meaning owing to their content? If the latter, then that should raise your credence that such non-contextual expressions are possible in principle. But to deny the notion of intrinsic content because you can't currently write one down is short-sighted.

>Phenomenal consciousness is "real" in the sense that it can drive us to talk about phenomenal consciousness... This seems to be essentially what Frankish says

In the paper you link, Frankish is circumspect about his theory being eliminative about phenomenal consciousness:

>Theories of consciousness typically address the hard problem. They accept that phenomenal consciousness is real and aim to explain how it comes to exist. There is, however, another approach, which holds that phenomenal consciousness is an illusion and aims to explain why it seems to exist. We might call this eliminativism about phenomenal consciousness. The term is not ideal, however, suggesting as it does that belief in phenomenal consciousness is simply a theoretical error, that rejection of phenomenal realism is part of a wider rejection of folk psychology, and that there is no role at all for talk of phenomenal properties — claims that are not essential to the approach. Another label is ‘irrealism’, but that too has unwanted connotations; illusions themselves are real and may have considerable power. I propose ‘illusionism’ as a more accurate and inclusive name, and I shall refer to the problem of explaining why experiences seem to have phenomenal properties as the illusion problem.

So he seems to accept "eliminativism about phenomenal consciousness" as accurate, but with unwanted connotations. But later on he takes a more unequivocal stance[1]:

>I do not slide into eliminativism about phenomenal consciousness; I explicitly, vocally, and enthusiastically embrace it! Qualia, phenomenal properties, and their ilk do not exist!

[1] https://twitter.com/keithfrankish/status/1182770161251749890


> Sure, if our measuring stick is information, then there is no difference in kind, merely a difference in complexity. But the complexity difference between the two is worlds apart, thus substantiating the distinction I'm pointing to.

I agree 100% that computational complexity can be used to make meaningful distinctions. It's not clear that that's the case here, though. That is, I agree that the quantity of information is worlds apart, but if the information content is all of the same kind and requires no special changes to the computational model needed to process it, then I don't think the magnitude of information is relevant.

> For something to be intrinsically contentful, it has to intrinsically pick out the intended target of reference, not merely be the source of entropy from which another process picks out a target.

I don't think this distinction is meaningful, due to the descriptor "intrinsic". It suggests that an agent's thoughts are somehow divorced from the environment that bootstrapped them, that the thoughts somehow originated themselves.

The referent of one of my thoughts is an abstract internal model of that thing, formed from my sensory memories of it. So if "intrinsically contentful" information is simply information that refers to an internal model generated from sensory data, then this would suggest that even dumb AI-driven non-player characters (NPCs) in video games have thoughts with intrinsic content, since they act on internal models built from sensing their game environment.
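The NPC point can be made concrete with a toy sketch (the class, fields, and threshold below are hypothetical, invented purely to illustrate "acting only on an internal model built from sensing", not any real game engine API):

```python
class NPC:
    """A trivial game agent that never acts on the world directly:
    it senses, updates an internal model, and acts only on the model."""

    def __init__(self):
        self.model = {}  # internal representation built from sensing

    def sense(self, world):
        # Copy a partial view of the environment into the internal model.
        self.model["player_dist"] = world["player_dist"]

    def act(self):
        # Decisions reference only the internal model, never `world`.
        return "chase" if self.model.get("player_dist", 99) < 5 else "patrol"

npc = NPC()
npc.sense({"player_dist": 3})
action = npc.act()  # the NPC "decides" based on its model alone
```

If this counts as information that refers to an internally generated model, the question is whether anything beyond complexity separates it from a thought.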

> But to deny the notion of intrinsic content because you can't currently write one down is short-sighted.

Maybe, but I'm not yet convinced that there's real substance to the distinction you're trying to make. I'm all for making meaningful distinctions, and perhaps "mental states" driven by internal sensory-built models is such a distinction, but I'm not sure "thought" or "information with intrinsic content" is a good label for it. "Thought" seems like overstepping given the NPC analogy, and per the above, "intrinsic" doesn't seem like a good fit either.


hackinthebochs: Your point seems to be that a thing has to exist, to have some ontological status, in order to have thoughts at all, even if that "thing" is only itself a thought or, more vaguely, a temporary convergence of "something or other." When you say "self" or "I", as in "I think therefore I am", people drag in a whole load of qualities they attribute to a self or an I. I feel like that is what your interlocutor is getting caught on. I understand and take your point.


> we erroneously conclude that the facts we perceive are first-person

this doesn't make sense... the perceiving itself is what makes something first-person, not the object of perception


That's not the technical meaning in the consciousness debate. I invite you to read about Mary's room for an introduction to the distinction I was describing.


Oh okay... I thought you were denying the reality of first-person subjective experience as commonly understood, not a narrowly-defined technical term. That seems more reasonable then (though also less interesting).


It's still pretty interesting! The Mary's room thought experiment is short and simple, but it'll make you think hard.


I hadn’t heard of that one before.

As a real human story, it would be fascinating. As an abstract philosophical thought experiment, it seems close to worthless. How is it actionable?

Why bother with a thought experiment, when there are real world examples? The book Fixing My Gaze is a great place to start, written by someone whose eyes were out of alignment as a child and only gained true stereoscopic vision as an adult.


These intuition pumps are useful philosophical tools to test the limits and cohesiveness of a particular idea. Mary's room is intended to challenge materialism because it's very difficult to explain, using materialism, how Mary could not learn something new upon seeing red for the first time, even in principle. So it's about the logical cohesiveness of the whole theory. That said, there are plenty of strong refutations of Mary's room, but intuition pumps typically just confirm your own biases, so materialists accept these refutations and antimaterialists are not convinced by them.


> intuition pumps typically just confirm your own biases, so materialists accept these refutations and antimaterialists are not convinced by them.

Yeah, I think that’s a good way to describe my complaint about philosophical thought experiments. They’re fun to argue over, but very rarely actually cause anyone to shift their positions, and as far as I can tell have zero practical value.

I think it’s interesting to compare it to e.g. Einstein’s thought experiments around relativity, like photons bouncing between mirrors on a fast-moving train. Those are useful because they help explore the ramifications of some mathematically rigorous physics proposals, and point us towards real world experiments. The underlying postulates are well defined.

Similarly, Einstein again, the EPR paradox is a very strange thought experiment around quantum mechanics that has turned out to be amazingly fruitful (if not in the way he wanted). Again, the underlying assumptions being tested are well-defined.

In contrast, the “Mary’s room” scenario tells us nothing useful and has no connection to the real world, because it’s not actually about the physics or physiology of colour at all. It’s about “qualia” which nobody can agree on how to define in the first place. There’s no starting point for the discussion, so there’s always an escape hatch for the point of view you support.


> In contrast, the “Mary’s room” scenario tells us nothing useful and has no connection to the real world, because it’s not actually about the physics or physiology of colour at all.

But it could be. One materialist response to Mary's room actually disputes the physicality of the entire arrangement. Consider that in order for Mary to be able to answer any question about the optical system of the human brain, she would have to be able to answer nearly every question possible; it requires ungodly amounts of data and computing power from the quantum level and up.

That amount of information in a finite space would collapse into a black hole, per the Bekenstein bound, so Mary's room is actually incoherent at its core. You can see shades of this in Dennett's response to Mary's room.


ianmerrick: I never understand this. How is the fact of experience in any way debatable or something that people disagree on? I am not asking about the epistemological details which appear to be a prerequisite to the experiences we have, the neurons and neurotransmitters and brain networks and the physics of light and phosphorus and the eye etc.; I am talking about experience itself. How can it be confusing?

Often when I have this conversation it appears to me that somehow, impossibly, the other side suddenly gets fuzzy on this thing I call "experience".


Well, they don’t call it “the hard problem” for nothing.

I’m reminded of The Hitchhiker’s Guide to the Galaxy, where Deep Thought points out that they can’t begin to answer the ultimate question of Life, the Universe and Everything because they can’t even clearly state what the actual Question is.


  "This is a thought, therefore thoughts exist" is the valid, non-circular version.
Let us cast this argument in formal logic.

What are the axioms? What is the conclusion?

https://isabelle.in.tum.de


Fascinating, an actual idea about consciousness that I haven't heard before.


Does this mean the experience of pain, sound, color, etc. are illusions?


They are an illusion in precisely the same way your car or your day job are illusions: we don't admit any notion of qualia/cars/day jobs into our ontology of physics, and so qualia must ultimately be explained by appealing only to that ontology.


>the same way your car or your day job are illusions

But what work is calling these things illusions doing for you? That they're not fundamental units of the furniture of the universe doesn't mean they don't exist or play necessary causal or explanatory roles.


> But what work is calling these things illusions doing for you?

It serves to help distinguish that which is reducible to more fundamental ontological entities, from that which is irreducible and thus ontologically fundamental.

I agree that these concepts certainly fulfill useful causal and explanatory roles. Whether they are "necessary" has some room for debate.


How do we know when we have arrived at something that is irreducible?


In a similar manner in which we settle on the primitives of physics theories: parsimony in explaining the available data.


What about "parsimony in explaining the available data" indicates that it is irreducible though?


Every theory posits some axioms. These are irreducible by definition.

The challenge is choosing which axiomatic basis we ought to prefer given our incomplete information. This is answered by induction [1].

[1] https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_induc...
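As a toy illustration of the idea (my own sketch, not from the linked article): Solomonoff induction weights every hypothesis consistent with the data by 2^-(description length), so a short rule dominates a verbatim lookup table that fits the same observations. The "description lengths" below are made-up stand-ins for real program lengths:

```python
def fits(hypothesis, data):
    """True if the hypothesis reproduces every observed bit."""
    return all(hypothesis["predict"](i) == bit for i, bit in enumerate(data))

def posterior(hypotheses, data):
    """Solomonoff-style weighting: prior 2^-length over consistent hypotheses."""
    weights = {h["name"]: 2.0 ** -h["length"]
               for h in hypotheses if fits(h, data)}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

observed = [0, 1, 0, 1, 0, 1]

hypotheses = [
    # A short rule: the bits simply alternate.
    {"name": "alternating", "length": 5, "predict": lambda i: i % 2},
    # A long rule: memorize the sequence verbatim (grows with the data).
    {"name": "lookup-table", "length": len(observed) * 8,
     "predict": lambda i: observed[i]},
]

probs = posterior(hypotheses, observed)
```

Both hypotheses predict the data perfectly; the shorter "axiomatic basis" ends up with nearly all the posterior mass, which is the sense in which induction picks between them.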


What you're describing is "the best way to do science, so we can make some progress". That's all well and good, but what other people are talking about is the nature of reality, in pursuit of which those crazy people are perfectly willing to doubt axioms.

As well they should, and as is their right, since a set of axioms is effectively a collection of ground facts selected to make logical reasoning across a domain possible, nothing more.

That doesn't make them true in the big sense of True; it makes them expedient, productive of theory, generative, a lot of wonderful things, maybe even strongly implied by all evidence, but not a priori true. They're dubitable.


> That's all well and good, but what other people are talking about is the nature of reality, in pursuit of which those crazy people are perfectly willing to doubt axioms.

Solomonoff induction does doubt and change axioms. It's a fundamental part of the whole process in fact.

> That doesn't make them true in the big sense of True, it makes them expedient, productive of theory, generative, a lot of wonderful things, maybe even strongly implied by all evidence, but not apriori true. They're dubitable.

Logic is used to make distinctions. Two theories with differing axiomatic bases will make different distinctions, but if they make the same predictions in all cases, then they are logically the same, i.e. there is a fundamental isomorphism between them. In this case, it literally doesn't matter if one is "actually really true" and the other is a mathematical dual of some sort.

For instance, polar and Cartesian coordinates are completely equivalent. A theory cast in one might be easier for us to work with, but even if reality really used the other coordinate system, it quite literally doesn't matter.
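The coordinate equivalence is easy to demonstrate: converting a point to polar coordinates and back recovers it (up to floating-point precision), so no predictive content is lost by working in either system. A minimal sketch:

```python
import math

def to_polar(x, y):
    """Cartesian -> polar: (x, y) becomes (r, theta)."""
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    """Polar -> Cartesian: the inverse mapping."""
    return r * math.cos(theta), r * math.sin(theta)

# Round-trip a point through both representations.
x, y = 3.0, 4.0
r, theta = to_polar(x, y)
x2, y2 = to_cartesian(r, theta)
```

The two representations are mathematical duals: any theory stated in one can be restated in the other with identical predictions.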

In the case when the two theories do differ in their predictions, we should epistemically prefer one over the other, and Solomonoff induction shows us how to do this rigorously.


Someone, a physicist, a woman whose name I forget, has lately questioned the parsimony assumption, or more accurately the whole beauty assumption: roughly, the idea that the most beautiful or parsimonious theory is the correct one. Just a data point for this conversation, not an argument.

Re: Solomonoff, if you're chucking away unnecessaries, which is what I understand Solomonoff to be doing (truthfully, I had to be reminded what his theory was), then that's all well and good, let's chuck. But you are still left with the problem of whether the axioms are true. That's a different thing entirely, except to the extent that we define true operationally, as having predictive power over the things we understand and know about, in the way we understand them.

We moderns are all deeply enmeshed with scientism, which is an ism that says logic and reasoning and the scientific method etc. are the only valid tools for acquiring certain (indubitable) knowledge. What if it's just not true? Then what?


So, is this to say we approach the irreducible best with induction?


You can only really "experience" the model your brain makes of the world, not the world directly. You end up assuming the world exists and there's no Descartes' Demon.


Right... we are just colonies of cells which behave in programmed ways to generate predictable responses from other cells in other parts of the colony who have their own jobs to perform in maintaining the homeostasis of the colony. The illusion of self is useful because it ensures that the collection of cells entrusted with executive functions act in the interest of the entire colony by perceiving it as a unified whole.


Another way to put it: the experience is real, but it might be misleading.

A hallucination is an experience of something that doesn't exist, but the experience itself does.


Misleading in some respects, but overall helpful. Like a computer desktop.



