Nothing of the sort is clear. If you train a GPT-2 instance on a corpus of the philosophy of consciousness which has been rewritten into the first-person singular, and you then ask it a question which it answers with a discourse on "my consciousness", is it conscious? Your argument here says that it is.
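For what it's worth, that thought experiment is cheap to actually run. Here's a minimal sketch using the Hugging Face transformers library; the fine-tuning step on the rewritten corpus is elided, and the prompt is only illustrative:

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    # Load stock GPT-2; fine-tuning on a first-person philosophy-of-
    # consciousness corpus (the hypothetical step above) would go here.
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    # Ask it about "its" consciousness and sample a reply.
    prompt = "Tell me about your consciousness."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The output will read as first-person discourse either way; the question is whether that discourse is evidence of anything.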
You're assuming without evidence that consciousness is, and is only, an epiphenomenon of a sufficiently complex neural net, which you are of course welcome to do. But attempting to cloak that assumption in positive language, as you do here and elsewhere, is a bit of linguistic chicanery that doesn't deserve to pass entirely without comment.
Ok, so is there anything at all that a brain "with consciousness" can do that a brain "without consciousness" can't? If not, your definition of consciousness is entirely meaningless.
And, yes, no one really knows what that means. This is by way of being the whole point: that the question of what consciousness is and means is a matter for philosophy rather than science, as is everything else that can’t, or can’t yet, be precisely enough formulated to be susceptible to scientific inquiry.
(Personally, I take no position on the question, other than that as far as I’m concerned I have consciousness and that’s good enough for me, and I graciously extend the same privilege to all human and many nonhuman animals - dolphins, for instance, out of professional respect for a fellow species of highly successful bastards. I just think it’s fun to poke at unwarranted senses of certainty every now and again.)
I think you are very mistaken in one assumption: that GPT-2 cannot experience qualia.
I am not even sure it would need any additional training.
Or I misunderstand what you mean by qualia. But the only alternative interpretation of the word I can think of is identical to "being a specific physical system", which does not involve consciousness at all.
“Qualia” is a philosophical term of art referring to the subjective experiences that are considered to uniquely define consciousness. A creature without qualia perceives a noxious stimulus; a creature with them feels pain. If, as other commenters would have it, there is something of the “god of the gaps” about consciousness, then qualia are what fill those gaps.
This is miserably unsatisfying, but like I stated previously, consciousness cannot be externally observed with any known method.
> is there anything at all
The "what can it do" refers entirely to an internal characteristic known only to itself (or at least not known to me).
Is that then a meaningless concept? Perhaps. But in that case, I would suggest there simply isn't a "meaningful" definition of consciousness. It's just the Turing Test.
Consciousness has been observed externally many times using a very well-known method: a vast number of analogous reactions to a large variety of stimuli. This is the very reason we aren't all solipsists and instead believe in the consciousness of other humans.
And that's all that any agreed upon measurements and observations have ever been. Just because you can't (yet) quantify it or model it formally doesn't mean you can't observe it.
Creatures which we’d all agree do not possess consciousness also display “a vast number of analogous reactions to a large variety of stimuli”. What do you imagine it proves about consciousness that we do, too?
I was being succinct. It is not the quantity of reactions that is important, but the type and nature. The quantity is simply what makes it a reliable measure.
The type and nature don't seem to matter, either. How does external observation of behavior shed any light on a phenomenon only perceptible through internal experience?
Can you cite a source for this argument? I’d be interested in better understanding it, but your presentation of it has thus far not aided this goal.
There are no phenomena that are only perceptible through internal experience. (A term that you've basically introduced in No True Scotsman fashion.)
Consider a radio antenna. A radio operates by responding to electrons sloshing around in a wire. The radio can 'experience external reality' iff that sloshing behaves analogously to something 'external': another antenna. This kind of analogy is what observation is, and it basically defines what an experience is. (Though in general, it doesn't need to be external: you can easily observe internal state or have a feedback loop.) Two antennas can be said to pick up the same signal only if their responses strongly correlate.
A brain doesn't work any differently, it's just a far more complex antenna that integrates more complex signals from more diverse sources. The only thing you have to do is establish a strong enough correlation (depending on the accuracy you care to assert) and to do that, you need to pick a number of aspects of the system, measure them, and ensure they correlate sufficiently nicely.
Again, this has nothing to do with the brain per se: all scientific measurements operate on the same idea. A large enough number of correlated observations is sufficient to establish whether two phenomena are the same. Humans are so good at doing this implicitly that we don't even question if other people have emotions, thoughts, ideas, or experiences that differ significantly from our own. In fact, human development involves a great deal of social mimicry and exploration, which serve to constrain people's behavior to those things which communicate shared experiences particularly strongly. (Conversely, human behavior is not so unpredictable that we can't build an understanding of it.)
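To make the correlation criterion concrete, here's a toy sketch in Python with numpy (the signal shape and noise levels are invented, purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # A shared "external" signal and two noisy receivers (antennas).
    t = np.linspace(0, 1, 1000)
    signal = np.sin(2 * np.pi * 5 * t)
    antenna_a = signal + 0.1 * rng.standard_normal(t.size)
    antenna_b = signal + 0.1 * rng.standard_normal(t.size)
    unrelated = rng.standard_normal(t.size)  # picking up something else

    # Pearson correlation as the "same signal" test described above.
    def corr(x, y):
        return np.corrcoef(x, y)[0, 1]

    print(corr(antenna_a, antenna_b))  # near 1: same phenomenon
    print(corr(antenna_a, unrelated))  # near 0: different phenomena

On this view, establishing that two people share an experience is the same move, just over many more channels at once.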
"Humans are so good at doing this implicitly that we don't even question if other people have emotions, thoughts, ideas, or experiences that differ significantly from our own."
And yet other people often do, to the extent that the concept of "neurotypicality", and its converse, are necessary components of (the closest thing we yet have to) a complete theory of mind.
It's interesting to me that you accuse me of the No True Scotsman fallacy while introducing the concept of some sort of external signals for which a human brain functions exclusively as a receiver. That reads like an attempt to introduce a mind-body duality, but given your prior commentary I doubt that is the case. The closest I can come to making sense of it is that you seem to argue that consciousness consists entirely in experience of, and response to, outside stimuli from other humans and from the environment, with no de novo contribution arising from within the person who experiences a given instance of consciousness.
Considering that this appears to be a sneaky attempt to define the concept of consciousness out of existence altogether, I have to assume I've misunderstood you somehow, because I can't imagine anyone would engage in such chicanery under the color of forthright and intellectually honest discussion. But we're talking so much past one another at this point that I do doubt the use of continuing any further.
There is no such thing as "internal experience" IMHO. You draw the boundary arbitrarily: me + Wikipedia have a different idea of myself than me without it.