
>Where is the proof that a thought requires a subject?

I'm not sure what you're asking. If the thought is self-referential, then the subject is inherent in the self-reference of the thought, namely the thought itself. I am not assuming some kind of conscious subjectivity here. Merely that the content of the thought is instantiated in the case of self-reference. If this were not the case, then the thought could not self-reference.

>has a technical meaning for the debate between materialists and antimaterialists

I'm familiar with the usual suspects in this debate (e.g. Dennett, Frankish), and I don't find their usage of illusion particularly "technical". They use it to mean that phenomenal consciousness doesn't exist or isn't real. But it's this very usage that I take issue with.




> If the thought is self-referential, then the subject is inherent in the self-reference of the thought, namely the thought itself. [...] If this were not the case, then the thought could not self-reference.

What property of self-reference do you think entails the properties of a "subject"? Does a self-referential term in a programming language also have a subject?

I suspect you would deny they do, so why does a self-referential thought entail a subject when self-reference in other domains does not? The only answers I can come up with either amount to special pleading on behalf of thoughts, or admit a superfluous concept into every self-reference, one that makes no useful distinctions and conveys no useful properties that I can see.
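
To make that concrete (a throwaway Haskell sketch, since that's the language I reach for further down the thread): here is a term defined entirely in terms of itself, with nothing subject-like anywhere in sight.

    -- A self-referential definition: the list is defined in terms of itself.
    -- The self-reference goes through without anything resembling a subject.
    ones :: [Int]
    ones = 1 : ones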

> They use it to mean that phenomenal consciousness doesn't exist or isn't real.

And what do you think they mean by "isn't real" or "doesn't exist"? Because in this setting, such a claim to me means that phenomenal consciousness is not ontologically fundamental. That's a pretty technical meaning. Dennett certainly didn't mean that phenomenal consciousness doesn't exist as a concept deserving a philosophical or scientific explanation; clearly it does or he wouldn't have written about it.


>so why does a self-referential thought entail a subject when self-reference in other domains does not?

I have a feeling you're taking "subject" to be a far more substantial concept than I am. A subject at its most austere is merely a distinct target of alteration. That is, it is subject to change from external or internal forces. So to have a self-referential thought is to say you are subject to change: from not having a thought to having a thought, thus you are a subject of some sort (the thinking sort). Although a thought itself does not necessarily imply a subject, recognizing one is having a thought (a self-referential thought) confers subject-hood on its bearer.

But to address the specialness of thought: thoughts have content. For example, a thought about an apple intrinsically picks out an apple. The thought somehow contains the semantics of the object of its reference. This is in contrast to, say, a bit that gains meaning from the context in which it is used (e.g. night/day, heads/tails, etc.). And so a self-referential thought intrinsically contains the necessary structure to entail subject-hood. Self-reference in other contexts does not have this intrinsic content requirement.
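
To sharpen that contrast (a throwaway Haskell sketch, nothing load-bearing): the very same bit means nothing on its own; all of its meaning is supplied by whichever external interpretation happens to read it.

    -- One and the same Bool; the meaning lives entirely in the reader.
    asDayNight, asCoin :: Bool -> String
    asDayNight b = if b then "day"   else "night"
    asCoin     b = if b then "heads" else "tails"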

>Because in this setting, such a claim to me means that phenomenal consciousness is not ontologically fundamental.

Frankish is very clear that he is eliminativist about phenomenal consciousness. He is not intending simply to be reductive about phenomenal consciousness, i.e. that it is "real" but reduces to physical structure and dynamics. His claim is that we misrepresent non-phenomenal neural structures as being phenomenal. This could be cast as the sort of reduction you're referring to, but he is clear that this is not what he intends.

As far as Dennett goes, he seems to endorse Frankish's project, so I would take him to also be eliminativist about phenomenal consciousness. But reading his own work, he's a lot more slippery about whether he's eliminativist or not, so I'm not sure where he actually lands.


> But to address the specialness of thought: thoughts have content. For example, a thought about an apple intrinsically picks out an apple. The thought somehow contains the semantics of the object of its reference. This is in contrast to, say, a bit that gains meaning from the context in which it is used (e.g. night/day, heads/tails, etc).

I don't see any distinction beyond the complexity of the information content. That thought about an apple carries an implicit context consisting of eons of evolution in a world governed by natural laws, ultimately bootstrapping a perceptual apparatus that you've used to accumulate relational knowledge throughout your life.

That digital bit you speak of was generated within the context of a program, itself governed by rules, that gives the bit some meaning within that program, although that context is considerably smaller than the one behind a human's thoughts. I honestly can't think of any non-contextual expressions besides axioms, so I don't accept the distinction you're trying to make.

And I'm not assuming anything about what you might mean by "subject". If a thought is like any other datum tied to a context, then I don't see that a subject is necessary, unless we explicitly define "thought" and "subject" to be mutually recursive. I just don't see how adding a "subject" in the context of every self-reference makes any meaningful distinctions, and so it appears entirely superfluous in that context.

Anyway, as a fun exercise: someone asked earlier in this thread about a formalization of this argument, so I played with expressing the self-reference and the existence proof in Haskell via the Curry-Howard correspondence. This is what I have so far:

    {-# LANGUAGE ExistentialQuantification #-}

    -- A bare existence witness: to build an 'Exists a' you must supply an 'a'.
    data Exists a = Exists a

    -- A thought is something that, given itself, yields its content.
    newtype Thought a = AThought (Thought a -> a)

    -- Self-reference: unwrap the thought and apply it to itself.
    thisIsAThought :: Thought a -> a
    thisIsAThought this@(AThought x) = x this

    -- The "existence proof": we can actually construct such a thought.
    thoughtsExist :: forall a. Exists (Thought a)
    thoughtsExist = Exists (AThought thisIsAThought)
I don't think this quite captures it, but expressive yet sound self-reference is tricky in typed languages.
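
One caveat I noticed while writing it: as defined above, Exists doesn't actually hide its type parameter, so the pragma isn't doing any work yet. A genuinely existential wrapper (same pragma, same definitions) would look something like this sketch:

    -- Hide the witness type: all we can say is that *some* thought exists.
    data SomeThought = forall a. SomeThought (Thought a)

    someThoughtExists :: SomeThought
    someThoughtExists = SomeThought (AThought thisIsAThought :: Thought ())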

> He is not intending to simply be reductive about phenomenal consciousness, i.e. that it is "real" but reduces to physical structure and dynamics. His claim is that we misrepresent non-phenomenal neural structures as being phenomenal. This can be cast as a sort of reduction as your referring to. But he is clear that this is not what he intends.

This doesn't read as consistent to me. Frankish's paper [1] might be the best bet for clearing this up. The way he breaks it down seems consistent with what I've been saying: we seem to perceive certain subjective qualities, but these qualities aren't really there; they're a trick, and so we simply need to explain why we think we have them.

I admit that my simplified analogies don't perfectly capture the nuance between Frankish's "conservative realism" and illusionism, but most discussions don't get so detailed.

So this all seems to hinge on the meaning of "real". Phenomenal consciousness is "real" in the sense that it can drive us to talk about phenomenal consciousness, at the very least. But like belief in other supernatural phenomena, it's likely a mistaken conclusion. This seems to be essentially what Frankish says:

> Does illusionism entail eliminativism about consciousness? Is the illusionist claiming that we are mistaken in thinking we have conscious experiences? It depends on what we mean by ‘conscious experiences’. If we mean experiences with phenomenal properties, then illusionists do indeed deny that such things exist. But if we mean experiences of the kind that philosophers characterize as having phenomenal properties, then illusionists do not deny their existence.

[1] https://nbviewer.jupyter.org/github/k0711/kf_articles/blob/m...


>I don't see any distinction beyond the complexity of the information content. That thought about an apple carries an implicit context consisting of...

Sure, if our measuring stick is information, then there is no difference in kind, merely a difference in complexity. But the complexity difference between the two is worlds apart, thus substantiating the distinction I'm pointing to.

But information is a poor measure here. The quantity of information in a system tells you how many distinctions can be made using the state of the system. But information doesn't tell you how such distinctions are made and what is ultimately picked out. For something to be intrinsically contentful, it has to intrinsically pick out the intended target of reference, not merely be the source of entropy from which another process picks out a target.

So in a structure that has intrinsic content, the process of picking out the targets of reference is inherent as well. This means that structural information about how concepts relate to each other is inherent, such that there is a single mapping between the structure as a whole and the universe of concepts. This requires a flexible graph structure such that general relationships can be captured. It's no wonder that the only place we currently find intrinsically contentful structures is in brains.
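
A crude toy sketch of the kind of structure I have in mind (just to fix ideas, not a theory): the relations between concepts are carried inside the representation itself rather than supplied by an outside interpreter.

    -- Toy encoding: relational structure is part of the representation.
    type Concept = String
    data Relation = IsA | PartOf

    type ConceptGraph = [(Concept, Relation, Concept)]

    apple :: ConceptGraph
    apple = [("apple", IsA, "fruit"), ("seed", PartOf, "apple")]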

>I honestly can't think of any non-contextual expressions besides axioms, so I don't accept the distinction you're trying to make.

Do the thoughts in your head require external validators to endow them with meaning, or do they intrinsically have meaning owing to their content? If the latter, then that should raise your credence that such non-contextual expressions are possible in principle. But to deny the notion of intrinsic content because you can't currently write one down is short-sighted.

>Phenomenal consciousness is "real" in the sense that it can drive us to talk about phenomenal consciousness... This seems to be essentially what Frankish says

In the paper you link, Frankish is circumspect about his theory being eliminative about phenomenal consciousness:

>Theories of consciousness typically address the hard problem. They accept that phenomenal consciousness is real and aim to explain how it comes to exist. There is, however, another approach, which holds that phenomenal consciousness is an illusion and aims to explain why it seems to exist. We might call this eliminativism about phenomenal consciousness. The term is not ideal, however, suggesting as it does that belief in phenomenal consciousness is simply a theoretical error, that rejection of phenomenal realism is part of a wider rejection of folk psychology, and that there is no role at all for talk of phenomenal properties — claims that are not essential to the approach. Another label is ‘irrealism’, but that too has unwanted connotations; illusions themselves are real and may have considerable power. I propose ‘illusionism’ as a more accurate and inclusive name, and I shall refer to the problem of explaining why experiences seem to have phenomenal properties as the illusion problem.

So he seems to accept "eliminativism about phenomenal consciousness" as accurate, but with unwanted connotations. But later on he takes a more unequivocal stance[1]:

>I do not slide into eliminativism about phenomenal consciousness; I explicitly, vocally, and enthusiastically embrace it! Qualia, phenomenal properties, and their ilk do not exist!

[1] https://twitter.com/keithfrankish/status/1182770161251749890


> Sure, if our measuring stick is information, then there is no difference in kind, merely a difference in complexity. But the complexity difference between the two is worlds apart, thus substantiating the distinction I'm pointing to.

I agree 100% that computational complexity can be used to make a meaningful distinction. It's not clear that that's the case here, though. That is, I agree that the quantity of information is worlds apart, but if the information content is all of the same kind and requires no special changes to the computational model needed to process it, then I don't think the magnitude of information is relevant.

> For something to be intrinsically contentful, it has to intrinsically pick out the intended target of reference, not merely be the source of entropy from which another process picks out a target.

I don't think this distinction is meaningful, due to the descriptor "intrinsic". It suggests that an agent's thoughts are somehow divorced from the environment that bootstrapped them, as if the thoughts somehow originated themselves.

The referent of one of my thoughts is an abstract internal model of that thing, formed from my sensory memories of it. So if "intrinsically contentful" information is simply information that refers to an internal model generated from sensory data, then this would suggest that even dumb AI-driven non-player characters (NPCs) in video games have thoughts with intrinsic content, since they act on internal models built from sensing their game environment.
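
A minimal sketch of what I mean, with made-up types: the NPC's "referent" is nothing more than an internal model updated from sensed events.

    -- Hypothetical NPC: its referents are internal models built by sensing.
    data Observation = SawPlayerAt (Int, Int) | HeardNoise
    data NpcModel = NpcModel { lastKnownPlayer :: Maybe (Int, Int) }

    sense :: NpcModel -> Observation -> NpcModel
    sense m (SawPlayerAt p) = m { lastKnownPlayer = Just p }
    sense m HeardNoise      = m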

> But to deny the notion of intrinsic content because you can't currently write one down is short-sighted.

Maybe, but I'm not yet convinced that there's real substance to the distinction you're trying to make. I'm all for making meaningful distinctions, and perhaps "mental states" driven by internal sensory-built models is such a distinction, but I'm not sure "thought" or "information with intrinsic content" are necessarily good labels for it. "Thought" seems like overstepping given the NPC analogy, and per the above, "intrinsic" doesn't seem like a good fit either.


hackinthebochs: Your point seems to be that a thing has to exist (have some ontological status) to have thoughts at all, even if that "thing" is only itself a thought or, more vaguely, a temporary convergence of "something or other." When you say "self" or "I", as in "I think therefore I am", people drag in a whole load of qualities they attribute to a self or an I. I feel like that is what your interlocutor is getting caught on. I understand and take your point.



