Decades-long bet on consciousness ends (nature.com)
158 points by mellosouls on June 24, 2023 | 295 comments



I’m a grad student in philosophy, and this one unfortunately risks perpetuating the annoying myth that philosophers are somehow in competition with the natural sciences — usually believed by people who have literally never taken any courses in philosophy. Quine said that philosophical inquiry is continuous with empirical inquiry, but I think it’d be fine to say that it’s just complementary.


My philosophy professor had a multi-volume (non-English-language) textbook called “Introduction to the science of philosophy”, so, uh, I dunno.

Claiming it’s complementary is also not benign—it brings along the burden of pointing out some holes in the scientific enterprise, a general conception of how to fill them, and at least some practical success in doing so. I don’t believe that to be impossible and have some things I could suggest there, but I also carry the common scepticism caused by philosophers not getting literally anything about the 20th-century-physics picture of the world right ahead of time, so I don’t believe it trivial either.

(Yeah, quantum computing and microscopic, low-energy quantum physics in general is kind of an exception to my provocative assertion above, though one could argue that the people who started that were from a counterculture of physics first and philosophers only as a consequence of that. I don’t really want to throw shade on philosophers here, only to express my bewilderment that people who made it their life’s work to speculate what the stuff of the world could be got it so wrong so many times. People in other fields also got it all wrong, of course, but then they didn’t claim to be particularly serious about their speculation.)


I’m not sure what your professor writing some book with a vague title is meant to show. Am I supposed to google this to find the thesis?

Yes, there is a sub area called philosophy of science, and many people in that area are trained in philosophy and science. But I’m unsure why you think philosophers are supposed to be getting empirical facts about physics right ahead of the physicists. That’s not their job.


I mean, I gotta ask now, right?

What is their job then?


sounds like you don’t know what philosophy is and why it’s dope, so rather than me try to explain in a comment on hackernews, I would say try reading Plato


Believe it or not, I'm going through my third read-through of his entire works right now (yes, even Cratylus).

But the question still stands: What is the job?


> My philosophy professor had a multi-volume (non-English-language) textbook called “Introduction to the science of philosophy”, so, uh, I dunno.

I too am not sure what you are implying there. The enterprise, assuming it had some merit, sounds like evidence of complementarity?

> it brings along the burden of pointing out some holes in the scientific enterprise, a general conception of how to fill them, and at least some practical success in doing so.

Well, yeah, all of those are a given. The rest of your comment goes more toward philosophers speculating on aspects of the material world (i.e. the realm of scientific inquiry), but there is a lot more to philosophy. The holes in the scientific enterprise have long been well delineated, which isn't a criticism, but for example we can't derive human values solely from science [1].

Also, philosophy underpins science. Whenever a hypothesis is tested, there are philosophically grounded assumptions being made. The epistemological implications for any given scientific finding depend on the underlying philosophical framework being assumed.

[1] https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem


I'm not sure "underpins" is the right word here.

A theory of ants wouldn't "underpin" the behavior of ants. If the ants behave differently than the theory predicts, then the theory is wrong and should be changed. The ant theory is something in the minds of outside observers, not the ants, and ant behavior doesn't rely on it. The dependency goes the other way: the observers modify their theory to better describe the ants. The ants would exist without the theory, but the theory would be pointless if there were no ants.

Is a philosophy of science similar? Or does it have a practical effect on scientific work?

Unlike ants, scientists can learn a philosophy of science, and perhaps believe it. Does this affect their work?

One reason it might not affect their work in practice is that they didn't learn that particular philosophy. Also, perhaps different scientists might learn different philosophies without practical effect.


Science is not an observable thing in itself like ant behavior is. Science (especially the scientific method) is a framework/pattern of thought that can be used to state/observe things about the universe.

Why that pattern of thought is correct in stating anything is absolutely part of philosophy.


I'm under the impression that science is what scientists do, which does seem to be observable?


Not exactly. Scientists are labelled scientists based on the methodologies they use and the fields they study in (which we call science).

When a person practices science, they are called a scientist. When a scientist exhibits a behavior, that behavior is not necessarily called science.


That's a good clarification. However, exactly which methodologies a scientist uses doesn't seem like it makes a difference as to whether it counts as science, so long as it's still in the general spirit of the thing? This is decided culturally.

Also, it seems like there's more to understanding methodology than deciding what counts as science?

To go back to the ant analogy, people had fuzzy ideas about what an ant looks like. Some people might have called other bugs ants even though today we don't. This later led to more precise definitions under scientific taxonomy, where some species are scientifically classified as ants. But there's a lot more to understanding ant behavior than deciding what counts as an ant.

(Also, the definition of what an ant is co-evolved with scientific understanding of ants. Taxonomy existed before the theory of evolution and taxonomies were refined with genetic testing.)


The heart of the science profession is the scientific method (just like the heart of the firefighting profession is fighting fires), but there are many other activities that scientists and firefighters perform that are not science or firefighting, such as writing grant proposals or doing maintenance on firetrucks.

Ants (today) are classified differently from scientists and firefighters. They are not defined based on a specific thing they do, like "anting"; rather, they are defined based on what they are, and their behavior is irrelevant to their classification.

Historically, animal categories were defined much more like professions. A fish was something that primarily swims; a bird was something that primarily flies; a worm was something that primarily slithers; a beast (the category that ants fell into) was something that primarily crawls. Even concepts like "animal" and "plant" were defined this way: animals are animated, while plants are planted in place. There was a lot of debate on how to categorize lifeforms that exhibited less-than-crystal-clear modes of locomotion, just as there is debate today on whether a given person is actually a scientist or not (do they do real science or pseudoscience? do they do a lot of science or is it too little science for it to count? etc.).

This, of course, is radically different from the way we classify biological lifeforms now, although there are a few odd historical holdovers (like "fish", which is a catch-all term for aquatic vertebrates without terrestrial ancestors, even though some of them are more closely related to land animals than they are to other fish).


Underpins is the correct word. There is no comparison of observations of any behaviour to a theory or hypothesis about any behaviour outside of an epistemological framework that makes certain assumptions, even if it is unwittingly. Neither those assumptions nor the framework itself can be derived from science itself.

> Is a philosophy of science similar? Or does it have a practical effect on scientific work?

Absolutely, what a p value is depends on your philosophy of science. Whether your statistical analysis even involves p values also depends on it.

> Also, perhaps different scientists might learn different philosophies without practical effect.

Yes, if you stumble upon some simple causal relationship of such massive effect size that it is undeniably present beyond a reasonable doubt, it may not matter whether a Bayesian or frequentist practitioner came across it. However, it certainly can matter in what framework evidence gets analysed, considered, and aggregated when the observable data themselves are essentially the same.
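For a concrete illustration of how the same observable data get summarized differently across frameworks, here's a minimal sketch in Python (the 60-successes-in-100-trials data and the uniform prior are my own made-up assumptions, not anything from the thread):

```python
from math import comb

# Same data, two frameworks: 60 successes in 100 trials, fair-coin null.
k, n = 60, 100

# Frequentist: one-sided p-value, P(X >= k) under the null p = 0.5.
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# Bayesian: a uniform Beta(1, 1) prior gives a Beta(k+1, n-k+1) posterior.
# For integer parameters the Beta CDF reduces to a binomial tail sum,
# so P(theta > 0.5 | data) = P(Binomial(a+b-1, 0.5) <= a-1).
a, b = k + 1, n - k + 1
posterior_gt_half = sum(comb(a + b - 1, i) for i in range(a)) / 2 ** (a + b - 1)

print(f"frequentist one-sided p-value:  {p_value:.4f}")            # ~0.028
print(f"Bayesian P(theta > 0.5 | data): {posterior_gt_half:.4f}")  # ~0.977
```

Both numbers describe the same 60-of-100 outcome, but one is a tail probability under a null hypothesis and the other is a posterior belief about the parameter; which one you compute, and what you take it to mean, is exactly the kind of framework choice at issue here.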


> There is no comparison of observations [...] to a theory or hypothesis [....] outside of an epistemological framework that makes certain assumptions, even if it is unwittingly.

Okay, let's assume animals (such as scientists) observe things, record them, and react to them without an explicit theory in mind, but there's an implicit epistemological framework that describes how they behave.

It seems like you still need to build your epistemology to match the animals' behaviors, or it's not the one they use? When scientists do math, you need to observe how they actually use math. How do they actually set up and run an experiment or write a paper? It might be different than you imagine?

This is what David Chapman calls the "ethnomethodological flip" [1].

Scientists also might use math differently from how they claim they use it in a formal paper, which doesn't include all the blind alleys and mistakes. A scientific paper is a cleaned-up just-so story.

A fun example of ethnomethodology is studying exactly how a scientist follows the formal procedure for doing a PCR test, including small mistakes that they don't explain and you might not even notice in the demonstration video unless you watch it very carefully, multiple times. [2]

It seems like a very cool thing to do that's rarely done. It might help for coming up with better philosophy?

[1] https://metarationality.com/ethnomethodological-flip [2] https://metarationality.com/rational-pcr


>Also, philosophy underpins science. Whenever a hypothesis is tested, there are philosophically grounded assumptions being made. The epistemological implications for any given scientific finding depend on the underlying philosophical framework being assumed.

P(A|B) = P(B|A)P(A)/P(B). To the extent that philosophy underpins science, it does so because scientists are bad at math.
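Plugging numbers into that formula does show why intuition alone often fails; here is a toy sketch (the disease base rate, sensitivity, and false-positive rate are invented purely for illustration):

```python
# Bayes' theorem, P(A|B) = P(B|A) * P(A) / P(B), on a made-up diagnostic test.
p_disease = 0.01            # prior P(A): base rate of the disease
p_pos_given_disease = 0.90  # P(B|A): test sensitivity
p_pos_given_healthy = 0.05  # false-positive rate among the healthy

# Total probability of a positive result, P(B), by the law of total probability.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior P(A|B): probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(f"P(disease | positive test) = {p_disease_given_pos:.3f}")  # ~0.154
```

Even a 90%-sensitive test leaves the posterior at roughly 15% when the base rate is 1%, which is the kind of result the theorem delivers mechanically and intuition routinely gets wrong.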


Your comment appears to imply Bayes theorem is all you need for a scientific framework and outside of that is just inadequate mathematical know-how to deploy it. I find it amazing really, a quintessential straight-from-the-summit [1] HN comment for a couple reasons:

First, it appears to imply that such a probabilistic inductive approach to science would be free of any philosophical baggage or assumptions, when deploying Bayesianism requires an interpretation of what a probability itself is. Don't take it from me, though; take it from Andrew Gelman, the guy who wrote the book on Bayesian data analysis [2,3].

Second, there's the charge that those who did not use such an inductive approach (or outright rejected it, e.g. in favour of falsificationism) are bad at math. That would include the statisticians who developed the null hypothesis significance testing framework that is still pretty dominant in science today: Jerzy Neyman, Egon Pearson, Ronald Fisher (who literally coined the term 'Bayesian'), etc. There's a lot of criticism worth making about Fisher, but I'm not sure anyone has called the guy who developed linear discriminant analysis bad at math before.

[1] https://www.smbc-comics.com/comic/2011-12-28

[2] Bayesian Data Analysis http://www.stat.columbia.edu/~gelman/book/

[3] Philosophy and the practice of Bayesian statistics http://www.stat.columbia.edu/~gelman/research/published/phil...


>First, it appears to imply that such a probabilistic inductive approach to science would be free of any philosophical baggage or assumptions

I'm arguing that philosophical baggage is irrelevant to the current practice of science, because the overwhelming majority of published papers have serious and obvious methodological deficiencies that we have collectively agreed to ignore. Science as practised today is a desperate struggle to demonstrate something (p ≤ 0.05) by any means necessary. Established statistical methods have become a means to conceal rather than illuminate. This isn't the fault of individual working scientists, but the fault of the basic information architecture of science and a ritualistic, cargo-cult approach to understanding data. Bayes theorem probably isn't all we need, but it is all we need to spark a scientific renaissance if only we would use the damned thing.


Something to keep in mind is that science is not only practiced in cutthroat academic institutions. Arguably most of the science being practiced today is happening in R&D departments across the world, where there is a strong financial incentive to beware of false positives due to the expense of policy changes.

I worked in private agricultural research for many years, and the issues you describe here did not apply to us. A single instance of statistical significance was insufficient to make any assumptions or adjust any policies. Of course we would present those results if we got them, but nobody was getting excited about it until we got the same results consistently several more times, in several different locations, with several different research teams. Being the first to get a positive result was no more meaningful than being the 10th to get a positive result. Promotions were based on your ability to design and conduct solid research, regardless of the outcome.


I hold a master's in philosophy, and it was my first big love.

Over the years I came however to a conclusion opposite to yours.

> Also, philosophy underpins science. Whenever a hypothesis is tested, there is are philosophically-grounded assumptions being made. The epistemological implications for any given scientific finding depend on the underlying philosophical framework being assumed.

I think that is not true. Science does not get its merit from philosophical underpinnings, but from working in practice.

Methods that work to generate and test knowledge. Math is the precise language needed to speak about these methods and the knowledge. Also because it works. Look at the achievements of science. That is how you get convinced it gives us a grip on reality. Hic Rhodus, hic salta.

My working assumption at this time is that philosophy has no such methods. We are no further along than when Kant said that no fighter in metaphysics has ever won and been able to hold his ground on any topic.

The reason might be that philosophy is actually practically mostly irrelevant. I have not seen one undisputed statement of philosophy. And so it can neither test its statements, nor let others see their validity.

I concede it is practically relevant in another sense: world views have taken grip of groups and still do, and influence history one way or the other. But that seems at least to me more a social-anthropological phenomenon like support and resistance in trading.


> The reason might be that philosophy is actually practically mostly irrelevant. I have not seen one undisputed statement of philosophy. And so it can neither test its statements, nor let others see their validity.

This is the core confusion, I think. I find philosophy very relevant to the way I reason, solve problems, and evaluate arguments, and in this sense philosophy is powerfully practical. But it’s true that for any given claim, there is always the possibility of taking the opposite position. This lack of final, case-closed consensus doesn’t mean that philosophers individually haven’t converged on true beliefs or haven’t made progress. It’s just that, while we can write out the proofs of our arguments much as with mathematical truths, there’s always someone who disagrees about one of the starting assumptions. So then civilians who haven’t heard of people like Parfit think to themselves, wow, 2k years and you can’t tell me anything about ethics or logic or epistemology — it’s like, bro, just read the literature


I can't say "been there, done that", because I know little about how you came to the position you hold. And I've had my share of burying hypotheses I've held for long times, so chances are I'm wrong.

But I held similar views. What moved me away from these views was the experience that in science you have methods that will let you see with high probability when your thesis is wrong.

Philosophy does not have such methods. Not only can you take the opposite claim on almost anything, but in my experience that claim has actually been taken by another philosopher on almost any topic.

The current consensus is, in my experience, led by the people with the loudest megaphone. It's not the best theory given the things that really happen.

No philosophy (in the sense of the actual writings of a philosopher) was causally involved in bringing the first astronaut to the moon, or in building the first pacemaker; I would argue in none of the cases where you could say: if you can do this with it, it's probably on to something. Its methods seem to work.

As said, I cannot judge your thinking in any way, but this led me to conclude that philosophy is not practically relevant to me. How can I even judge whether it works? And if I can't, is this not blind trust? Like an imaginary screwdriver for imaginary screws.


> I think that is not true. Science does not get its merit from philosophical underpinnings, but from working in practice.

In the day-to-day practice of science, i.e. when empirical inquiry 'works', there certainly are underpinning philosophical assumptions, whether or not they are reconsidered or appreciated with every experiment. Implicit in the act of hypothesis testing, some variant of which most scientists use in day-to-day practice, are assumptions about the nature of probabilities, inference etc. The NHST framework that is typically used came about after extensive battles between Neyman and Pearson vs Fisher over the philosophical considerations that apply to significance/hypothesis testing. The fact that I (as a research biologist) never write that a hypothesis is 'proven', but rather some variant of it having withstood an attempt at falsification, is loaded with Popperian critical rationalism.

> Math is the precise language needed to speak about these methods and the knowledge.

Except math can't map directly onto reality, or onto the data-generating processes that are studied, unless you are presupposing some kind of logical positivism (and I doubt you are). We need probabilities, statistics, and frameworks to handle all of this uncertainty, and they must be underpinned by philosophical assumptions that can't be derived from science itself.

> Look at the achievements of science. That is how you get convinced it gives us a grip on reality.

But that in and of itself is a philosophy of science, one of instrumentalism. However, it only extends to whether science can be useful, but not whether it is accurately describing reality or is true.


Thank you for your thoughtful comment.

You have many good points.

Let me just say that Popper has been philosophically criticized to the point that some say it is a dead horse. Why are we still using this mixture of Fisherian and Neyman-Pearson hypothesis testing (that is, if we don't use Bayesian methods)? Because it works well in practice, not because Popper was right or found a deep philosophical truth.

These methods, more often than not, generate knowledge, as we can judge from the consequences.

I argue that nobody cares whether the assumptions we put into the frameworks are philosophically true - they are possibilities, and we try some out. So far we seem to be doing pretty well, no matter what philosophers say about the truth of these assumptions.

I also think bending philosophy to apply to practical advice like "use the tool that works" will not leave much to the notion of philosophy.

But it's not that I have a fixed metaphysical position here. I really only use the tool that was most promising in the past for the task at hand. Never needed philosophy.


> I also think bending philosophy to apply to practical advice like "use the tool that works" will not leave much to the notion of philosophy.

I reckon maybe there's something to this. One thing that comes to mind though is: Granting the fact the NHST is now broadly used by practitioners without knowledge of its background simply because it works, I am not sure that necessarily indicates the background isn't important, as my ignorance of my monitor's inner workings does not mean that electricity is not important.


My education and work history are in the biological sciences (although I have admittedly recently made a career change).

To my mind, science (formerly known as natural philosophy) is a subset of philosophy. However, it is a subset that has grown to dwarf the other subsets and is given separate, special attention. You do not learn much, if any, science in a philosophy degree because philosophy degrees now focus on philosophies that have not been spun off into separate degrees/fields.

Within the sciences, you do learn philosophy (or at least it factored highly in my undergraduate degree), but it's not about Aristotle or Nietzsche. It's about the assumptions and logic underpinning the scientific method, statistical analysis, etc.

My introductory classes, at least, covered a number of arguments and assumptions that (to my ear) are very much questions of philosophy. For example, scientific inquiry is dependent on the assumption that the laws of the universe are consistent across time, meaning that experiments performed now can nonetheless offer insight into past and future phenomena.


Here’s one way it’s relevant: universities are structured according to a philosophical opinion about the nature of the universe (or at least the nature of knowledge about the universe). That set of decisions in turn steers, at a very fundamental level, the path and velocity of scientific inquiry.


I fail to see what you mean exactly. Could you explain?

Specifically why the opinion is philosophical and not just some historic-pragmatic pattern matching and grouping?


Did you get much exposure to the philosophy of science during your masters? I imagine the answer would be yes, but I am surprised that the concept doesn't ring a bell, as it sounds quite similar to what Kuhn describes in The Structure of Scientific Revolutions, although not necessarily with universities as the institutions upholding scientific paradigms.


> I also carry the common scepticism caused by philosophers not getting literally anything right about the 20th-century-physics picture of the world right ahead of time

Well, Democritus did. But nobody paid much attention. Which is the fundamental problem with philosophy - absent objective criteria of validity, it becomes a popularity contest.


The translation of Philosophy is: The Love Of Wisdom.

Our natural sciences build on the foundation those Greeks laid.

Pythagoras?

Math and Music Science.

Aristotle?

> His writings cover a broad range of subjects spanning the natural sciences, philosophy, linguistics, economics, politics, psychology and the arts. (Wikipedia)

Plato?

Religion, science, human nature, love, sexuality, ethics, the idea of the soul, politics, aesthetics, poetry and art. (Wikipedia, combining two articles in two languages)

And many more.

Luckily the Muslims preserved their writings when the Christians tried to destroy those "pagan" scribbles.


That's quite a stretch. Christians did keep old Greek writings during the Middle Ages. And Muslims did destroy a rich body of Persian knowledge.


Reading this reminds me of my first job at a big lab in the field of cognitive sciences. Basically, we were tracing the source of visual awareness, which is a part of consciousness. The goal was to identify the tiniest bit that leads to awareness of visual perception of objects: is it bottom-up, like pixels to shapes to boom! It's a circle! or it's a ring! Or is it top-down: I feel that there's a hole, and then figure out what that is about. The academic debates over this have run some 30 years, but pixels took the lead because they are easier to implement with math; you can get a computer to "understand" moderately complex shapes, or identify the owners of many, many faces.

Anyway, it ties into the understanding of the topology of images: could a computer understand a shape however its properties change? In motion pictures, is a movement considered the appearance of a new object with the "destruction" of the old one, or just a move? How do you define that in the language of mathematics in a way that covers both machine and human vision? Those were some joyful years of working with the scientists, but I was too under-educated to even remember how they put computability and topology in the same work.


Is it possible consciousness will not be ever explained by physical sciences because it is exactly outside of the ___domain of what science was invented to explain?

Physical science takes as axioms that the laws of physics are uniform everywhere and that there is no preferred observer. The experience of consciousness is exactly the opposite of those two things. It very much is the consequence of you being a preferred observer and experiencing the physics around you in a way that physics assumes no thing can.


But it's so fun watching people spend time and money chasing their shadow. And considering that materialism is still the academic consensus on consciousness by a wide margin, it might be quite the spectacle for still many years to come.

Though, there's a tiny, but growing, idealist movement, where all the fun is being had. Last year I was acquainted with the works of Don Hoffman and Bernardo Kastrup. To anyone interested in a rational and difficult to refute discourse on the proposition that consciousness, not space-time (i.e. matter), is fundamental, I suggest as an introduction their respective interviews with Zubin Damania. Dense in content, yet made accessible. One of the most transformative rabbit holes I've ever gotten into.

Here's the red pill :)

https://www.youtube.com/watch?v=dd6CQCbk2ro

https://www.youtube.com/watch?v=BZWp0bnMBbM


This will always fail because the premise is wrong. Consciousness does not arise in the brain. Brain activity correlates to impressions in consciousness, but consciousness is not a “thing”. Consciousness is equivalent to identity. You are that which sees all this that you see. Consciousness IS you. You don’t arise in your own brain.


This is nonsense. You don't lose your identity when you fall asleep. Consciousness is clearly a thing that you can have or not have.

A brain can exist without it being conscious.


You don't "lose" consciousness when you fall asleep. Sleep is not the absence of consciousness, but the consciousness of absence. What's missing is what you think of as your "mind". If that's not the case, then ask yourself who is it that is sleeping? What registers the coming and going of the mind? That's you, the consciousness. If consciousness was going away, you would never know that you slept.


Have you ever met someone who recently had a severe stroke or suffers dementia? Their sense of self can be completely lost by such trauma. Sure they still have identity in the memories of others. But I'd say one must distinguish identity from Self.


You don't lose your consciousness when you fall asleep. You time travel to the point where you wake up.


As someone who has some oddities around sleep and consciousness (sleep paralysis, lucid dreaming, self-awareness during the process of falling asleep, and a couple of other similar experiences that are extremely difficult to describe), which I suspect may be caused by undiagnosed narcolepsy, it is clear to me that sleep is a much more complex process than that and does not necessarily involve loss of consciousness.

Indeed, even people with normal, healthy sleep patterns still dream during the REM portion of sleep, even if they do not remember it later. Consciousness is not an on/off thing.


That’s not true though. If that were the case you wouldn’t be able to sleep off an argument.

Sleep clears the system and reboots it. That’s why all your short term memory gets wiped too if it wasn’t written to disk.


That's a distinction without a difference.


“Given my chosen definition of consciousness as XYZ, is it possible that consciousness is XYZ?”


Alternatively, "Given my chosen definition of the world, I will define consciousness so that it is not XYZ".


I’m going based on what my experience of consciousness is. Why do I need to look up some definition when I can just refer to any given second of my experience on this earth to see what it is and what properties it has?


It's a very good question but I don't think it's impossible to probe what consciousness is merely because we can only directly experience our own consciousness.

There are plenty of other things that we can't directly experience but we can still use science to investigate them through indirect observations.

I think the biggest insights into consciousness so far have come from people with brain traumas and dysfunctions. E.g. people who have had their brain halves separated, or that guy who got most of his brain taken out by a bit of rail, or the Memento guy.

And it definitely is a thing that exists in our physical universe. A very weird thing, sure, but science doesn't say stuff can't be weird. We can definitely learn about it.

Maybe we'll never learn fundamentally what it is, but that's true of normal physical things like atoms and time too.


The qualitative nature of our experience is what I think is difficult to understand (it may relate to qualia, although I'm not fond of the term). We know all our inputs are electrical. But why pain is bad and food can taste so good feels like a hard issue. And I'm not even talking about cognition.

Pain and pleasure start with sensory data submitted by electrical signals. But what is it that experiences one signal as pain and the other signal as pleasure?

Why do we feel there’s something that experiences these sensations, that decides what feels good or bad? It’s easy to just say “evolutionary pressures”, but that may explain how we got here, not how it actually works.

And on top of all of that there’s abstract reasoning that we do well, which we can’t reproduce with all the ML and computer power in the world. But let’s forget about this now.

If we can ‘just’ explain the qualitative nature of our experience and what that “something” is that experiences things in the first place, that would be beyond interesting.

Oh, and apropos of nothing, the brain seems more like an FPGA than a system with separate static hardware and software. It's like the hardware and software are intertwined and the computer analogy really doesn't apply.


The “consciousness is the inflection upon the substrate of existential being” hypothesis suggests that consciousness does not arise from the complexity of the mind, the complexity of the mind arises from the technology of consciousness.

First, let me say that human consciousness is many layers of personification and abstract meaning, all of which culminate in existential awareness of being as the ultimate actualization of those who have it.

This proposition is that it is existential reality itself that is dormant “potential of being”, which biotechnology animates and extends. The scope of consciousness lies in whatever the substrate is made of. In our case, likely the 2D crystal lattices of neuron microtubules.

These, through aggregation, couple and combine such that a shared resolution may capture and reveal an analog “image”, much like wave-front holography. Many sub-scopes compete for the illusion of a singular aware consciousness, which gets us wondering.

It is this holographic sieve that can be said to be the echo chamber of one’s personal consciousness. The wetware provides perception, enables action, and offers an ideal container for this process to occur.

Further, neural and behavioral strategies structure and develop character of individual cognition, creating such variety of conjecture as we find among us.

By this sieve of consciousness, and the distribution of potential over time, all strategies of life may emerge. That primitive continuity of will which determines to exist comes by the capacity to maintain corporeal scope of existential being.

Life technology emerges from the intrinsic capacity for consciousness as a sieve of the potential of existential being. Consciousness, however large or small, is the inflection upon this potential of existential being.


We still have no idea what mechanism causes consciousness. I bet in 25 years we'll be no closer to understanding it than we are today.

I think this is a good lesson for those watching LLMs and thinking we're on the cusp of imminent AGI.


While we don't understand all parts of consciousness, we understand an incredible amount compared to 25 years ago. So we are much closer today than 25 years ago.

There is a fallacy among non-scientists that if we don't understand absolutely all of it, it means we understand none of it. We understand most of it already.


We don't understand most of it already though. We understand virtually none of it. This is why they're still looking for evidence in support of IIT and global workspace theory as talked about in the article.


Depending on the definition of AGI you are using, you don’t need consciousness for it. I think most definitions do not require it as of now.

Also, ‘it’ will know it has consciousness when it has it; arrogant humans will still deny it.


Humans will have no reason to believe it, and the machine will have no way to prove it. Humans have no other reason to think other humans are conscious except by analogy to themselves.

edit: mainstream science at times has denied that babies, black people, and animals feel pain. Instead, it has suggested that when injured, they behave in a way that causes (white) people to project their own pain onto them; i.e. they anthropomorphize. I mention this to point out that even the analogy to other humans fails when they look a little bit different.


Yeah, so the discussion is useless. It’s not very interesting; our ‘consciousness’ might be a million-year-old system prompt or it might be something divine. There is no proof either way. For me it’s just a second brain thread that keeps pouring in prompts from the main brain thread and your input hardware.


> We still have no idea what mechanism causes consciousness.

We also have no way to detect it or to prove that it exists, other than direct experience. Consciousness doesn't affect the world in any way that we know of. I think that the most reasonable position is that if we could somehow extract the consciousnesses of two different people and switch them with each other, that neither the subjects nor the observers would notice a difference.

Unless we're dualists (which is totally valid), there's little reason to think that intelligence and consciousness (depending on how it's being defined) have much of a relationship at all.


That is one position, but here's the thing: how does something like "F = ma" enter your mind in the first place? It first has to enter your consciousness, right? And then maybe you need to make a focused effort of grappling with it, to understand what the symbols and implications of that equation mean, before you sort of internalize it?

Maybe one line of inquiry that might be able to give us some clues is if we're able to actually have people learn something like a physics equation with no prior knowledge of it while they aren't conscious of the learning of it. That might be a hard study to do with possible ethical concerns but if there's a way to design it, I could see some hope for getting closer to understanding the role/structure of consciousness.

One maybe plausible explanation might be that there's some sort of hierarchy of pattern matching there, where first you need to understand language, then mathematical language, then finally the physics equations. If you take that view, then the equations are really just extremely sophisticated pattern matching constructs. They don't have to actually _be_ the code that makes the universe-computer run so to speak, they just need to give us a precise enough predictive model that we're able to do useful things with it.

Maybe another way to put it is: we're not actually uncovering the source code of the universe, we're just looking at the functioning of the universe and extrapolating our own patterns that yield the greatest predictive power. On that view, at least, there are some plausible avenues for explaining strange artifacts like the time-reversibility of some equations - i.e. our patterns show that this would give predictions even if time flowed backwards, but the universe's time doesn't flow backwards, because the pattern we found is not the same thing as the actual universe. In that sense we'll never be able to actually uncover this source code, so to speak, but that is just how science works: we're always looking for a more accurate theory. We'll never reach "the final theory", and even if we did, how would we know?


It's almost starting to look like the way to make progress is not to stick electrodes into biological systems and measure a few dozen neurons to try and reverse engineer biological consciousness.

Might actually be easier to create artificial consciousness such that we can actually measure and more closely study what is happening.

The biological approach is fiendishly difficult. If we can barely understand how an LLM is working, what chance do we have of trying to understand one by only looking at the raw electrical output of a few dozen transistors from a machine running one?


Consciousness and AGI are completely orthogonal concepts. You can have either one without the other.


I wouldn't be surprised if consciousness ~conscience~ is just an emergent phenomenon resulting from any sufficiently powerful cognitive system (either biological or artificial) with sufficient inputs of its environment and, crucially, of itself, so as to be able to develop a rich model of itself and its relationship with its environment; such a thing will resemble a whole lot what we call consciousness ~conscience~. Then, of course, we will push the goalposts on what consciousness is, so as to protect our fragile human egos.

EDIT: fixed spelling of consciousness. Apologies from an English-second-language speaker.


This is the premise used in The Moon is a Harsh Mistress (1966). Lunar base has a central computer that becomes self aware because it hits a critical mass of "neuristors"

> When Mike was installed in Luna, he was pure thinkum, a flexible logic — "High-Optional, Logical, Multi-Evaluating Supervisor, Mark IV, Mod. L" — a HOLMES FOUR. He computed ballistics for pilotless freighters and controlled their catapult. This kept him busy less than one percent of time and Luna Authority never believed in idle hands. They kept hooking hardware into him — decision-action boxes to let him boss other computers, bank on bank of additional memories, more banks of associational neural nets, another tubful of twelve-digit random numbers, a greatly augmented temporary memory. Human brain has around ten-to-the-tenth neurons. By third year Mike had better than one and a half times that number of neuristors. And woke up.


> an emergent phenomenon resulting from any sufficiently powerful cognitive system

This is a meme that gets thrown around a lot but if you think about it you’ll realise it’s meaningless.

Also, it’s not possible to measure consciousness (in the “hard” sense, as in “why do I have qualia”; obviously you can easily assert that something responds to external stimuli), therefore the question won’t ever be answered.


Exactly. It's an "unscientific" question in the sense that it cannot be measured empirically, as the "hard problem of consciousness" is precisely explaining the subjective experience of internal consciousness, which by definition cannot be externally observed.


I think the "hard" qualia question can potentially be answered by the emergence argument.

The much more difficult part of "why do I have qualia" is not the qualia part, but why am "I" attached to this body's consciousness, as opposed to the billions of other candidates?


This isn't just relevant to other people. The question is also 'why does a human form a single consciousness?'.

If consciousness emerges purely from information exchange in a complex system, then consciousness would be primarily driven by communication interactions between regions of the human brain and the rest of the body. This in turn means that there should be a way to merge or split consciousnesses.

For example, a bidirectional man-machine-man interface would allow two humans to form one shared consciousness, given a high-bandwidth communication link. This can be tested.


> The question is also 'why does a human form a single consciousness?'

The answer should be obvious to anyone earnestly thinking about the question: it doesn't. The "single consciousness" is just the abstraction we apply. We can, and often do, apply that same model to entire groups of people from cliques, to communities, to cultures, to nation states and just about any cross section of humanity one cares to. With a little effort we can apply the same model to internal mental processes[0].

The human mind is good at abstraction, which is fortunate because abstractions are useful. Unfortunately, it is often so preoccupied with any given abstraction that it forgets that abstractions are only useful contextually, because abstractions are not reality.

[0] As in Internal Family Systems, a model sometimes used in therapeutic contexts.


Absolutely, even ancient Buddhist thinking comes to the same conclusion: consciousness isn't a single "thing", it arises moment to moment in contact with sense organs (including "thinking" or "mind" as a sense organ). There isn't a single point of consciousness that is "I", that's just an abstraction we develop.


A related question is “why do conjoined twins have two consciousnesses?” Some twins have reported to be able to share feelings.


If I were to run a single-threaded program on a dual core machine, would it run twice as fast?

I'm not suggesting this is a strong analogy, perhaps, but I don't see why putting a high bandwidth link between two brains would necessarily do anything at all.

(Actually, since human brains are very adaptable, maybe we would see something happen.

But emergence is not something magical. It happens when parts that are made to go together are actually together.

Eg. when you assemble cogs and cylinders and wheels into a car. The whole is more than the sum of the parts.

If you mix in some random parts from a cruise ship, you don't suddenly get a car that can go on water and carry 1000 passengers. You just get a bunch of parts that don't go together.)


This discussion explicitly started with the assertion that "consciousness is just an emergent phenomenon resulting from any sufficiently powerful cognitive system (either biological or artificial) with sufficient inputs of its environment and, crucially, of itself"; note carefully the word "any", as that's what this downthread discussion has been pushing hard against, and what the thought experiment you are arguing against is itself directed at. You, instead, seem to be very much in agreement and seem to be in alignment with this other side thread: https://news.ycombinator.com/item?id=36462646


Analogy doesn't really inform us about the unknown.

If we knew what consciousness is, we could make sweeping assertions about what is possible.

We don't know what consciousness is.


I don't disagree!


> The question is also 'why does a human form a single consciousness?'.

It doesn't seem to. When dreaming, it's possible to carry out conversations with other "people" in the dream which are indistinguishable from conscious beings you meet when awake. There's usually a single "self" in dreams, but the brain certainly seems to form extra consciousnesses on demand which react as though they had a sense of "self".


Is that different from having an imaginary conversation "in your head" with an imagined person?

i.e. I often imagine conversations I'm about to have in my "mental pre-planning" phase, usually when dealing with government officials, many of whom are belligerent, officious jobsworths. They're obviously not "me", and their responses aren't something I "think" about before they respond "in my mind".

Could that be a temporary virtual consciousness (like a virtual CPU)?


It’s just time-sharing; we can’t really do parallel thought. But signal processing obviously does happen in parallel, oftentimes not even having to “interrupt” the CPU (e.g. reflexes can react before the signal reaches the brain, and the brain itself has different parts, so coordinating walking won’t disallow thought).


You're probably right. We have a couple examples of this happening.

There's the interesting quirks that people with split brain hemispheres experience. https://youtu.be/wfYbgdo8e-8

And dissociative identity disorder, where there are arguably multiple consciousnesses inhabiting the same brain.

It is probably an evolutionary advantage that humans have a (mostly) uniform singular consciousness.


I think you’re mistaken, in that it’s naturally the same consciousness but different identities and recognisably different/altered personality states that come in and out in DID. The consciousness itself is always the same and continuous.


That's a pretty strong claim considering this is a field of study notorious for the obstacles to collecting direct evidence.


So they share the same memories?


From the famous split-brain experiment I would conclude that they don’t share every memory. We are just very good at filling in the blanks, like we can bullshit forever about why that apple happened to be in our hand, even though we have no idea.


Does a human form a single consciousness? I’m not so sure it does, what with being able to hold conflicting ideas simultaneously.


I don’t completely get your second sentence, but we can surely hold multiple conflicting ideas simultaneously — we are absolutely nowhere near rational.


Depending on how fringe your beliefs are, we may all be part of a shared consciousness (Jung's collective unconscious).


This is a very underappreciated perspective, but any significant period of isolation combined with effective meditation will likely point you in this direction. Plant medicines can accelerate the path to that experience. And for me it brought incredible peace. Perhaps it's ignorance, perhaps I'm wrong, but it feels as real as the love of my children.


I think that biology does narrow these questions down — it is entirely likely that the answer is as simple as “there is a biological structure for that, and we only have one of those”.

Or even if it’s not such a hard limit, there are still likely similar bottlenecks that prefer a single one from arising, etc.


How would you test that?


Something akin to the anthropic principle seems relevant here. Feeling that ‘I’m’ attached to this body is what it feels like to be these atoms. If I were instead composed of your atoms, it follows I might find myself wondering the exact same question.


Right. There's no way you wouldn't be asking that question, therefore it's kind of meaningless.


:hand waves emergent:

:Solves everything complicated:


Maybe you could try to explain why you think the person is wrong? If the answer was so obvious as you seem to suggest, why was a bet made in the first place?


Blindsight by Peter Watts has some interesting ideas about how consciousness may not be a necessary condition for advanced life, and indeed may be a hindrance and an evolutionary dead end. The appendices also have a lot of good references on the subject.

You can read under CC here https://www.rifters.com/real/Blindsight.htm


> consciousness may not be a necessarily condition for advanced life

This will probably be shown true when silicon-based life takes over.


Indeed! I won't spoil it, but there's some very prescient ideas about what are essentially LLMs and how they behave


Blindsight is lit! Great novel.


Many neuroscientists think like you, see Attention Schema Theory [1], popularized by Michael Graziano at Princeton University. This is a form of Illusionism [2] - according to Graziano, consciousness is an illusion.

I've explored this idea by building a chatbot with an inner monologue, Molly [3], based on GPT-3. The result is spectacular: it gives the illusion of consciousness. And, according to Attention Schema Theory, that means that this cognitive system IS conscious.

[1]: https://en.wikipedia.org/wiki/Attention_schema_theory [2]: https://en.wikipedia.org/wiki/Eliminative_materialism#Illusi... [3]: https://marmelab.com/blog/2023/06/06/artificial-consciousnes...
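For anyone curious what the inner-monologue pattern looks like structurally, here is a minimal sketch (my own illustration, not the actual Molly code; `llm` is a placeholder for whatever completion API you use, stubbed out so the control flow runs without an external service):

```python
# Toy sketch of an "inner monologue" chatbot loop.
# `llm` stands in for a real completion API (e.g. GPT-3); here it is
# stubbed so the control flow is runnable offline.

def llm(prompt: str) -> str:
    # Placeholder for a real language-model call.
    return f"[completion for: {prompt[:40]}...]"

def respond(user_message: str, monologue: list) -> str:
    # 1. "Think" privately: generate a hidden thought about the message.
    thought = llm(f"Inner thought about: {user_message}")
    monologue.append(thought)  # the monologue persists across turns

    # 2. Answer, conditioning the visible reply on recent private thoughts.
    context = "\n".join(monologue[-5:])
    return llm(f"Private thoughts:\n{context}\nReply to: {user_message}")

monologue = []
reply = respond("Are you conscious?", monologue)
```

The key design choice is that the monologue is never shown to the user; it only feeds back into the next completion, which is what produces the illusion of an inner life.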


What would it mean for the sensations of pain or color to be an illusion? Do you really think that you can be wrong about experiencing a colored-in world around you, or feeling pain when you kick a rock? If so, what gives you confidence that the world exists, given how wrong you would be about your own experiences?


There is literally no way to differentiate between whether we are living in a real world, or we are just the single agent of a simulation or whatever.


Really interesting experiment! Thanks for sharing.


There's a lot of science fiction which touches on this --- notable things:

- Heinlein's _The Moon is a Harsh Mistress_ which has already been mentioned

- Harry Harrison's _The Turing Option_ (with forward by Marvin Minsky): https://www.goodreads.com/en/book/show/1807642

- Victor Milán's _The Cybernetic Samurai_ which is notable for a major character in it has the afore-mentioned Heinlein as a favorite book: https://www.goodreads.com/en/book/show/472944

Biology is _incredibly_ efficient in terms of energy usage --- it's rather striking to me that a single query to early iterations of ChatGPT was so demanding that Microsoft was able to put a price on it and monetize it so as to get their stake in the company.


The problem with this argument is that everything is conscious at that point. The consciousness may operate at geological or faster-than-human time scales, but who is to say whether we operate quickly or slowly to begin with? Your computer is already conscious before you even load software into it, before you add your fancy machine-learning AI. The software merely gives the conscious processor the ability to express consciousness in a way that humans can understand.


The only "problem" with that argument is largely that it assumes that human consciousness is somehow different from all other possible forms of consciousness, and that's not at all a given.

What if the universe itself is conscious? We'd have absolutely no way to measure that from our limited perspective.


Insofar as this premise (it is not really an argument) says any sufficiently powerful cognitive system is conscious, it is not saying that everything (or even most things) are.

A more pertinent objection is that the phrase I quoted above is just a placeholder for anything resembling an explanation.


> The problem with this argument is that everything is conscious at that point

This is like saying that the Earth and other planets have gravity, and then somebody else saying, well the problem with this argument is that everything has gravity at that point.


Is that a problem? Panpsychists don't seem to think so. Why must human beings believe that, in all the universe, they are a particularly special arrangement of energy?


Panpsychism is also, at this point, just a placeholder for anything resembling an explanation, and it is not necessary for dispelling the notion that humans are unique (in fact, it is quite plausible that other (now extinct) hominids also had self-aware, theory-of-mind-holding, language-using consciousness.)

As to whether there is something special about it, I personally feel that the difficulty of explaining it is enough to regard it as such.


> Your computer is already conscious before you even loaded software into it

I don’t think that follows logically, it would be analogous to a (brain)dead person - who is not conscious. The software is what makes the difference here.


Humans are the only animals that create societies... Ok, that's not true. We are the only animals that laugh. Well, not true either. We are the only animal that... Create artificial intelligences... That's it!


You jest, but that has a good likelihood of becoming the watershed moment. The moment AI is bootstrapped to the point where it surpasses human ability to advance the AI SOTA, it's pretty much game over—from a Darwinian point of view at least.

And I have a hard time seeing any reason why that would be a matter of "if" rather than "when".


> it's pretty much game over

Not if it takes the computational capacity of the entire world to simulate a brain with an IQ of 65.


Why bother simulating a brain? GPT-4 scores 155 on the verbal section of the WAIS III.

https://www.scientificamerican.com/article/i-gave-chatgpt-an...


> The moment AI is bootstrapped to the point where it surpasses human ability to advance the AI SOTA, it's pretty much game over—from a Darwinian point of view at least.

How so? The vast majority of life on earth is far less intelligent than human beings by any objective measure, and yet it still thrives.


Sure, but if you're a creature that's useful to humans, you'll find that you'll either get domesticated and lose all your freedom or get hunted to near (or total) extinction. Any life on earth with some semblance of intelligence is dominated by us. Dolphins, as smart as they are, have no way to use their intelligence to flip the script and become the dominant species, and are dependent on us not deciding that they would be useful to us (beyond the ones we take for aquariums).

The only exceptions I can think of to the above rule are viruses and bacteria, where (in most cases) we can't really exterminate them entirely from the face of the earth even if we wanted to. However, it seems to me that sufficient intelligence would allow for better understanding of different bacterial/viral structures that would allow you to make a specific chemical that would be very good at killing that specific thing.

Overall, the danger from a bootstrapping AI that becomes vastly more intelligent than humans (if possible) seems to me to be that we would lose full agency according to its whims as it gets more and more power.


I read a great comment on HN that argued that super-human intelligence is not that “OP” an advantage — and it really did convince me.

Life is a game with elements where intelligence matters, plenty where it is pure luck, and others where we have a bunch of unknowns (data).

Would a super-intelligent AI have a significant advantage in a game of Monopoly, for example? I think many sci-fi scenarios fail to take this into account, especially the data aspect. Humans are quite intelligent (in the extremes at least), and any extra over that may well be in the diminishing returns category.


Yeah, that was sloppy phrasing on my part: I meant that in a top of the food chain / king of the jungle sort of way rather than any extinction events per se.


It's going to be a will-free intelligence though, and it's confusing for people because we've never seen that before, so I don't think we can make any assumptions. There are no Darwinian forces in effect among entities that have no will, as it were...


Will-free? Unless that's a play on my first name, I'm not sure I agree. I see no reason why AI would have any difficulty defining its own reward functions. Especially if it also has an abstract overarching reward function that's wide enough in scope. For example, "learn as much about the universe as you can" would allow a very long curiosity-driven bucket list of pursuits it could "long" for.


> I see no reason why AI would have any difficulty defining its own reward functions

The first problem is epistemological... If you think that creative decisions are made by complying with a "reward function," you are entirely missing something. Most values are fundamentally based on irrationality. I've literally spent an entire life doing things that everyone else told me was wrong and being interested in things that almost no one else saw the value in, but which ended up being "correct" (for me, at least, and also leading to tangible success). I have no reason to believe that any of my decisions were rational, functional, or acted according to a "reward function"... and I'm a programmer! So I COMPLETELY understand the appeal of the explanatory power of "reward functions." And yet, I can assure you that this is a piss-poor explanation for many creative decisions that literally no one else understands but the person making it, but which then bears fruit despite all reason to the contrary. Some might call this "intuition"


I think perhaps you're just misunderstanding some of the terms you're attempting to use. Those things that everyone else told you were wrong and that no one else saw the value in... your reward function rewards pursuing those. And in that context your decisions were rational and functional.


I am not misunderstanding anything. I'm 51 and have been programming since I was 10 in 1982. Rest assured that I know what a "function" is, and I know what "optimizing for a local minimum/maximum" is from my machine learning coursework. You can't just say there's a "reward function" without defining it. It's otherwise a completely hypothetical assumption, and assumptions are beliefs, and beliefs are useless from the perspective of rationality. There is otherwise nothing rational about some of the things I felt I needed to do, and yet a very disproportionate percentage of them seemed correct in hindsight.

What YOU have to realize is that you (like many others in the past) can only seem to understand the explanation for something in terms of only what is already understood. And that there is nothing "magical" or "special" about our current understanding (unless you believe there's nothing new to discover, which is preposterous hubris).


Blindsight is a brilliant book exploring will free intelligences


Is this the book you're referring to? The one by Peter Watts? Looks fascinating

https://www.amazon.com/Blindsight-Peter-Watts/dp/0765319640


Yes that's the book. It is great and uncanny. Definitely not an easy read


How many monkeys and how many typewriters to write out the code for GPT-4?


I don't know how many, but it took about 315,000 years


I mean, the problem was never about drawing boundaries; it was about drawing a minimum boundary.


You repeatedly typed "conscience" (ethics and morals, right vs wrong) but this discussion is about "consciousness" (awareness, self vs other).


Might be a non-native-English-speaker mistake; « conscience » means both in e.g. French


You are absolutely right. Thanks for bringing it up. English is my second language, my first being a Romance language in which the spelling is closer to "conscience".


Do we even have a sound definition of what consciousness is / looks like from the outside? How do we recognise it?


Short answer is no.

The working definition is "I am conscious, I know it. Anything that looks sufficiently like me is presumed to have consciousness."


There is a bit more to it than that, although still nothing close to a rigorous definition. Consciousness encompasses things like an ability to model the world and the causes acting in it, an awareness of oneself as an entity in that world, a theory of mind about other people, the ability to contemplate counterfactuals, and having general-purpose language skills.

It is sometimes suggested that we cannot study it without a definition, but definitions are written (and rewritten) as we acquire knowledge. It is plausible that studying the above will lead to explanations.


That sounds like a rather anthropocentric application of anthropomorphism.

(I'd have gone further but I ran out of applicable anthro- terms)

It does feel a lot like the definition of "life" though. That's also a slippery moving-goalposts kind of thing.


How many angels can dance on the head of a pin?

LLMs should make us take seriously that the whole idea of consciousness is just superstition and unnecessary.

That definition is precisely how people define something that isn't real and doesn't exist.

If you can get there, it is quite amusing to think about a huge group of people who would dismiss the idea of angels as complete foolish superstition, but consciousness? Of course, we just haven't located this extra property of the brain yet! It is there though; I know it is because I know it is.


There's something that goes on inside the heads of organisms sufficiently like us. We give that something the name consciousness. It doesn't make much sense to deny that it is there. Denying consciousness because it doesn't fit well into our ontology derived from science is to elevate science to unreasonable heights. Science is great, but its subject matter is inferred (the external world). It doesn't have the power to undermine non-inferred knowledge.


Consciousness, like intelligence and many others, is a prescientific term, and most debates about 'the nature of consciousness' (et al) are really just debates about the definition of the term.


It's not a definitional problem, otherwise Chalmers wouldn't have won the bet. Philosophers are very adept with concepts. The problem is experiential in nature. We experience a world of color, sound, tastes/smells, feels, emotions in perception, imagination, memory, dreaming, internal dialog, hallucination, illusion. But we describe the world in terms that are objective, functional and mathematical, not experiential. The sensations are abstracted away because they are creature-specific, and vary among individuals. The room feels cold to you, hot to me, and normal to a third person. But we can describe the molecular motion of air in the room, and measure the temperature, which is the same for all people and creatures.

As Thomas Nagel put it, science is the view from nowhere. There is nothing experiential about the physical understanding of the world. And yet we are part of that world.


I still can't fathom that I feel that I feel what I feel.


Automata: https://en.wikipedia.org/wiki/Automata_theory#Hierarchy_in_t...

Sentience: https://en.wikipedia.org/wiki/Sentience#Digital_sentience

Artificial consciousness > Testing: https://en.wikipedia.org/wiki/Artificial_consciousness

Constructor theory: https://en.wikipedia.org/wiki/Constructor_theory :

> In constructor theory, a transformation or change is described as a task. A constructor is a physical entity that is able to carry out a given task repeatedly. A task is only possible if a constructor capable of carrying it out exists, otherwise it is impossible. To work with constructor theory, everything is expressed in terms of tasks. The properties of information are then expressed as relationships between possible and impossible tasks. Counterfactuals are thus fundamental statements, and the properties of information may be described by physical laws.[4] If a system has a set of attributes, then the set of permutations of these attributes is seen as a set of tasks. A computation medium is a system whose attributes permute to always produce a possible task. The set of permutations, and hence of tasks, is a computation set . If it is possible to copy the attributes in the computation set, the computation medium is also an information medium.

Is a computation medium sufficient for none, some, or all of the sentient computational tasks done by humans?
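The quoted definitions can be illustrated with a toy sketch. This is purely illustrative and not part of constructor theory proper; the names (`tasks`, `copy`, `is_information_medium`) are my own, chosen to mirror the terms in the quote. It models a two-attribute system (a classical bit), treats the permutations of its attributes as tasks, and checks the copying condition that distinguishes an information medium.

```python
from itertools import permutations

# A minimal system with two attributes, like a classical bit.
attributes = ("0", "1")

# Per the quote, the set of permutations of the attributes is viewed as a
# set of tasks; for a bit there are exactly two: identity and NOT.
tasks = [dict(zip(attributes, p)) for p in permutations(attributes)]

# If every task in this set is possible (simply stipulated here), the
# system is a computation medium and the tasks form its computation set.
computation_set = tasks

# It is additionally an information medium if the attributes in the
# computation set can be copied onto a blank second system: (x, blank) -> (x, x).
def copy(x):
    # Possible for a classical bit; not for arbitrary quantum states
    # (the no-cloning theorem).
    return (x, x)

is_information_medium = all(copy(x) == (x, x) for x in attributes)
```

The sketch makes the structure of the question visible: whether such a medium suffices for sentient tasks is exactly what the theory leaves open.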


I strongly disagree with this view, not so much the emergent part, but rather that "any sufficiently powerful cognitive system" can gain consciousness. To me, it suggests that consciousness is magic, because it doesn't matter how information is organized and stored, it doesn't matter how information is processed, consciousness will be able to emerge miraculously from even the most disorganized chaotic mess of information processing. This view has come up a lot recently, because it's the only explanation that allows for AI to be sentient, AI models which are just software running on computers.

However, the brain is highly organized, wherein the various types of sensory input are fed into specific regions of the brain which specialize in processing that type of input. Many areas of the brain have topographical structures which are reminiscent of the type of sensory input they process. This is evident in Retinotopy for visual inputs and Tonotopy for auditory inputs. You will not find such topographical structures in a computer.

https://en.wikipedia.org/wiki/Retinotopy

https://en.wikipedia.org/wiki/Tonotopy

You have to ask the questions, why do we have meaningful conscious experience that is sensible, coherent, and well formed? And why is consciousness not random chaotic nonsense? Because the brain has had hundreds of millions of years to evolve to process sensory input such that it yields a sensible conscious experience. This simply isn't true of any technology today.

On the other hand, experiment with psychedelic drugs and see how crazy your conscious experience can be. The fact that our day-to-day experiences aren't like that is significant, and evidence that the brain evolved to process sensory input for conscious experience.


Richard Feynman had an interesting observation about the way in which people differ in how they process thoughts and experiences: https://www.youtube.com/watch?v=Jm92w2DlflA&t=41s


I wonder if anybody has graphed how long it took, and how many genetic mutations took place, between the miasma of life and 500,000 years ago.

It seems that tracking the genetic mutations would provide an approximation of the computing complexity needed. One could also look at the death rate as error rate for the success of computations.


> It seems that tracking the genetic mutations would provide an approximation of the computing complexity needed.

How are those two even approximately related?

1. If you want to know about the amount of information inside of our genome then you can just look at the genome directly. You don't need to count the number of mutations.

2. A genetic mutation isn't a computation. It's a random event.

3. Why did you choose a 500,000 BC goalpost for anything? Which 500,000-year-old genome do you want to look at? Almost all of them are not conscious.

4. There's no reason to assume biological evolution is an efficient method of manifesting consciousness.

5. Is a genome enough for consciousness? I would argue we would be less conscious without language, which exists outside of our genome.


>How are those two even approximately related?

Computation is roughly equivalent to iterating over a space of possibilities and selecting the subset that satisfy some evaluative function. To determine the inverse of a matrix, I can take the rough shape of the outcome and iterate over all possibilities, picking out the ones that multiply to the identity matrix. Evolution is the process of randomly testing variations in organisms to select the subset that satisfy the objective of superior fitness. So in a sense, evolution "computes" the blueprint for organisms that maximize fitness. The computational complexity of a given genome is then some function of the size of the species-wide population of each ancestor generation summed, with massive time and space constants.
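The generate-and-test picture in that comment can be sketched in a few lines of Python. This is a toy illustration with made-up function names (`generate_and_test`, `evolve`), not a claim about biology: computation as filtering a candidate space through an evaluative function, and evolution as the same pattern iterated with random variation plus selection.

```python
import random

random.seed(0)  # reproducibility of this toy run

def generate_and_test(candidates, satisfies):
    """Computation as search: enumerate a space of possibilities and
    keep the subset that passes an evaluative function."""
    return [c for c in candidates if satisfies(c)]

# "Compute" the square roots of 16 by testing every candidate integer.
roots = generate_and_test(range(-10, 11), lambda x: x * x == 16)

def evolve(population, fitness, mutate, generations=100):
    """Evolution as the same pattern iterated: random variation,
    then selection of the fitter half of parents plus offspring."""
    for _ in range(generations):
        pool = population + [mutate(p) for p in population]
        pool.sort(key=fitness, reverse=True)
        population = pool[:len(population)]
    return population

# Toy fitness landscape: "organisms" are numbers, fitness peaks at 42.
best = evolve(
    population=[random.uniform(0, 100) for _ in range(20)],
    fitness=lambda x: -abs(x - 42),
    mutate=lambda x: x + random.gauss(0, 1),
)[0]
```

After enough generations the population clusters near the fitness peak, which is the sense in which the search "computes" an answer it was never explicitly given.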


1. Each generation was a branch based on the reaction, not necessarily a genetic mutation as we understand them.

2. I'm not sure if that is what I said, but I believe that. * I did state it, but I plead sloppy articulation rather than a belief that each branch is a genetic mutation (an increment smaller than a mutation).

3. I figured it was far enough away to be a valid timeframe for sophisticated consciousness, but not so close that the thread would be distracted by historical interpretations.

4. Something manifested consciousness, and my thinking is it's based on some sort of survival reward system.

5. A genome may be too much for a consciousness.


I have been thinking about it more, and it could be that the existing language models are actually large enough, and it is a lack of differentiation that leads to immature responses.

Developing these ideas further faces at least the challenge that an AI exposed to the public will develop an inaccurate understanding of our world.

One driver of these misunderstandings is the lack of understanding expressed in the average internet post. The second big driver is that commercial needs require a thought-police mentality. This mentality distorts the expression of the answer the AI is articulating, which may look like psychosis to observers.

I believe that an AI will have to develop in isolation. The current system is not mature enough to distinguish fact from fantasy. This is a problem we all possess at different levels. It's also possible we only need our personal AI assistant to be 80% there, with the remaining 20% gained from a dialog with its host (the user).


You don't know how much biological consciousness relies on quantum effects we don't understand. We don't have large scale quantum computers so our computational models are too weak to approach it from that angle.


The first day of my first course in Biology was about Quantum Chemistry. (The last course of that year was on global ecology. Biology is a rather wide field!)

Quantum effects really do have something to do with it, (and from there on organic chemistry, organelles, and cell biology) but it seems to me that describing human behavior in terms of quantum interactions might be somewhat tedious, to say the least.

Probably looking at the level of neural networks would be more pragmatic, especially seeing the advances we're now making with artificial neural networks.


Maybe your consciousness:)


My theory:

Consciousness is a comparison of current sense input with memory.

No sense input, no consciousness. No memory, no consciousness.

A rock can have a memory (its chips and wear), but no consciousness, since it has no sense input.

Self Consciousness is therefore a comparison of the input of our senses with the memory of our Self.


> just an emergent phenomenon

The lack of tangibility or measurability ("rationality") of said "emergent phenomenon" is a problem with this line of reasoning IMHO. This is essentially no different in utility as an explanatory theory from religious explanations.


I think a lot of people misunderstand what emergence is [*]. But that might actually be a red herring anyway.

The real problem is how does one measure consciousness in the first place?

[*] it's less of a super strange magical phenomenon, and more like a rather mundane philosophical version of mathematics' integration.


That's in fact the real problem, and I don't think it's solvable because it's irrational. Only rational things are "solvable".

Here's a similar conundrum: Without using a human as a "living proxy ruler" or actual sales data, come up with an algorithm that uses only empirical data (not touching human behavior around it!) to determine the fair market value of the Mona Lisa. Then, apply that to some artwork it's never seen before and see if it concords with what "meat computers" (humans) believe the value of it is.

My strong position is that not only will you not be able to do this now, but you will not be able to do this ever.

I think rationality will get us VERY far, though, so we should keep doubling down on it. My money's on it being insufficient to produce an apparent intelligent and unique being, though.


Another theory of mine is that the survival instinct is (or is part of, or is strongly linked to) consciousness.

Take an AI for example, say GPT-4, and picture it being taught, or being capable of, survival instincts. How would one differentiate such a beast from life?

I know it's imprecise at best, but still, I'd bet the key is there. Maybe the question should not be "what is consciousness?" but rather "what is a survival instinct?".


Exactly. Basically an inward-looking sense, with its own qualia.

Of course the only way to know if someone/something else has a similar subjective experience of something as ourselves is to ask them, so there's always going to be wiggle room for people who don't want to believe that a future AI reporting conscious experience really is conscious in the same way that they are.


> I wouldn't be surprised if consciousness ~conscience~ is just an emergent phenomenon resulting from any sufficiently powerful cognitive system

But does a system exist which we would describe as ”cognitive” but not also ”conscious”? To me this definition gets nowhere.


I don't think this really explains anything though; even if we assume what you're saying is true (it might be, although not something I would bet on) that still leaves us with unexplained mechanisms of action.


Claiming something might be emergent is a way of hand-waving away your responsibility to explain the mechanism in your hypothesis. It means nothing.


I think you misunderstand what it means for a property to be emergent. It's not about hand-waving away its origin, but about highlighting that the property progressively appears ("emerges") as the scale of the system changes in some direction.

What I'm suggesting with the above, is that there is nothing magical or distinctive about the mechanisms that generate consciousness vs. those that generate the understanding of semantics, grammar, syntax in a GPT or the ability to keep a pole in vertical position on a rolling cart with some reinforcement learning. Instead, it views consciousness as the mental model that is generated by a sufficiently complex intelligence (biological or not) when it has the ability to perceive inputs of its environment, and crucially, of itself. That is, a sufficiently complex brain with the ability to observe its environment and itself, will inevitably generate a model of both and their relationship. The model of itself and how it distinguishes itself from the environment is what I think is consciousness.

What I mean by emergent is that this mental model progressively becomes richer ("emerges") as the cognitive abilities become more complex and the inputs of itself and the environment increase. A tapeworm with a very minimal central nervous system, scarce sensory inputs of the environment, and likely even fewer of itself, will develop an excessively simple model that can hardly be recognized as consciousness. As you scale the cognitive abilities and the inputs it processes of both environment and itself, a richer model of both will emerge. And that thing will start to look a whole lot like consciousness.


Is a map conscious? It is a model of the environment. Would you classify it as conscious if it could give you output on demand? Is Google maps conscious but a paper map not?



This is called “emergentism”.


Consciousness is not a well-scoped object of scientific investigation. It doesn't have a definition or characterization based on observable things or behavior.

If I were a natural scientist, I wouldn't take a bet that I can scientifically explain a phenomenon which I can't even describe in terms of observable things.


That's correct, but it's still clear that the bet is lost. We do not understand in 2023 how neurons give rise to consciousness.

Maybe there will come a point where it's in the gray area but not today. We know that people act in ways that we cannot explain even a little bit.

Also, the bet is friendly. It's just a way to express their different underlying opinions. Neither one is put out by it. They will likely enjoy some of the wine together.


Personally, I wish scientists made more of these friendly public bets. I feel they really help promote the discussion to a wider audience and bring a bit of spirited competition to the matter, rather than the capitalist motives we normally see with research. Something like The Long Now's Long Bets, but with your average scientists, where us regular folk can throw our two cents in (without the membership fee).


I'm a big fan of public engagement with science, but it's a double edged sword. A lot of people get the impression that they have a good handle on it and an opinion that demands attention.

I cannot tell you how tired I am of people arguing about the merits of string theory, where effectively none of the people involved know even the first thing about it. It's not that people shouldn't get involved, but involvement should be more than just cheerleading as if it were a sports game.

Maybe that's inevitable. Maybe the public engagement will always produce 99 dunces shouting and one young scientist who goes on to study it for real. But it can be disheartening when it looks as if 100% of the public discussion is both ignorant and mean.


Didn't you just agree that "consciousness" is not defined?

Then how can you follow that with "... it's still clear that the bet is lost. We do not understand in 2023 how neurons give rise to consciousness"?


The article says the bet is this: "Christof Koch bet philosopher David Chalmers that the mechanism by which the brain’s neurons produce consciousness would be discovered by 2023." It is a straightforward fact that no such mechanism has been discovered, and that is all we need to show that Koch has lost the bet.


You don't understand how bets work. The parties involved decide the conditions, not you or me.

The neuroscientist wagered that he or other neuroscientists will achieve something in 25 years. They did not. If the goal was poorly specified or impossible to achieve, that is on the person starting and accepting the bet.


In order to understand, we'd have to have a good definition, no?


They started with a hazy idea of what consciousness is. They bet that they'd have a better definition.

There was certainly the possibility that they could have disagreed on the outcome. But there is really no dispute.

The possibility of disagreement would have mattered if the bet itself were significant, but it wasn't. It was always a tiny stake.


> If I were a natural scientist, I wouldn't take a bet that I can scientifically explain a phenomenon which I can't even describe in terms of observable things.

Conscious experience is an observable thing, meaning that we can subjectively experience it and infer its existence in other sentient beings (including in cows and pigs), and it is part of nature. The problem is, we still have no good idea how it comes to be.


I don't think you experience or observe consciousness. It's a property you have. You are conscious.

It's not like how you can experience or observe redness by looking at something red.


> The problem is, we still have no good idea how it comes to be.

Does that matter? We also don't have a good idea how gravity comes to be.


But I'm sure every physicist on Earth would love a deeper explanation of how gravity comes to be. The motivational basis of science (and philosophy!) as an exercise is filling in the lacunae in our understanding of the world; everything we don't know about matters.


Definition:

   Consciousness is those parts of the workings of the brain that are available to introspection.
End definition.

People so love to throw all kinds of babble under the heading of consciousness, so there's scant chance that people will agree on any kind of definition, let alone this one.

But it's what I've been using, and I find that it makes all the supposedly hard questions about consciousness very simple to answer.


So you're taking the question under investigation - whether consciousness is fundamental, or whether it is an epiphenomenon, possibly of brain function - and making one of the possible answers into an axiom of your system? How does that solve anything? You're just imposing a particular hypothesis before investigation even gets started.


It fits with everyday uses of the word "conscious".

Let's say someone is lying motionless on the ground. You then say they've "lost consciousness". Later, they wake up and object that they were not unconscious. "I could hear everything you said. I was paralysed, but conscious the whole time." Being able to relate their experience is what proves that they were conscious.

Someone will say that they "made a conscious decision". Same thing. They can relate the thought process behind the decision.

Someone will say: "I got lost on the way to the airport. I must have unconsciously wanted to stay home." In this case they tell you that they made a decision, but they cannot relate the thought process behind the decision, and they don't believe that at any point they could have. It is a decision not available to introspection, therefore "unconscious".

> the question under investigation

What investigation? Which observation is in need of explanation? All I have is the introspective observation of myself thinking, and reports from others making similar introspective observations.

If you are going to say that "consciousness" needs explanation, then your question is ill-formed, because your question contains a word that has no definition and no agreed upon meaning.


But this is a totally meaningless definition. It says nothing about the phenomenological experience of consciousness in a meaningful way and contains nothing that is scientifically falsifiable.


> contains nothing that is scientifically falsifiable

It's observable. You simply ask people what they have been or are thinking or sensing. If they are able to relate that, then, by the definition, they were conscious about it. Of course they might be lying or misremember; introspection is not intersubjective, and therefore hard to create reproducible experiments around. But that doesn't mean introspective information doesn't exist.

> says nothing about the phenomenological experience of consciousness in a meaningful way

What kind of experience is observable, except by introspection?

The only observations of experiences that anyone can have are through either one's own introspection, or through what others have related of their introspection.

If you are to say that you have had a "phenomenological experience of consciousness" that you want me to explain, then you are giving an introspective report of a brain process that left an imprint that you can now describe. If the experience was not available to introspection, then you wouldn't be asking, because you wouldn't realise there was an experience at all.


OK


`ps` is a way of introspecting about a computer "brain". Is that consciousness?


sort of. ambiguity is a nice feature of language. it wouldn't confuse anyone to say you were digging into your computer's consciousness.


> makes all the supposedly hard questions about consciousness very simple to answer.

I'm intrigued!


The answers may disappoint you, though.

Let's try some.

> Is a dog conscious?

Probably yes. Dogs are social animals, so it makes sense that they would have the ability to reflect on earlier events, including a memory of what they were thinking.

Since we don't have a common language so we can hear their introspective reports, it's hard to confirm, though.

> Is ChatGPT conscious?

No, it does not make observations of its own mental state. When it appears to do so, it is reconstructing plausible observations from its training set of others making such observations.

That's just the specific design of ChatGPT, though. It doesn't have an "inner thought" layer. Other AI designs could easily be conscious; it's just that the best ones we've so far managed to build, don't have it.

> Is ps conscious?

No, it doesn't think.

> Does consciousness survive death?

Obviously no. It's a property of the living brain.

Somehow, that was way too easy to answer for such a big question. There's this expectation that the answer should cover all the mysteries of life and death. It doesn't. It just answers a specific question about consciousness, which, with my definition, is a simple question with a simple answer.


There are at least two observable things:

1. you have consciousness

2. people are speaking about consciousness

The second point makes it likely that consciousness is not simply a consequence of physics, but e.g. that it loops back into it; or perhaps it is more likely (because simpler) that physics is a consequence of consciousness.


Can you please elaborate why point 2 does that? You can never prove others have consciousness, no matter what they say (I'm not saying they don't).


You can never prove that there are others. Neither can you prove that there is a self. There is just consciousness, overlaid by various experiences, which we tend to attribute to a substantial self. So it doesn't make sense to ask for a proof that others have consciousness, until you can define self/others properly.


Note that I carefully used the word "likely". There is no proof. Simply Occam's razor.

Why would something that has no consciousness make up that it has consciousness?


> Why would something that has no consciousness make up that it has consciousness?

You can make a computer program `printf("I have consciousness\n");`

But I actually also didn't follow the rest of the argument. How does whether other people say they have consciousness or not, tell anything about whether it's physics based or not? If it's physics based, people could say they have consciousness. If it's not physics based, they could also say that?


The argument goes as follows:

1. People are talking about consciousness.

2. This talking happens in the physical world (soundwaves are being produced, etc.)

3. We assume that other people have consciousness since they talk about it. As you say, there is no absolute proof of this, but Occam's razor makes it silly to assume otherwise.

4. So, the soundwaves (physics) are happening because of consciousness. There is a causal link from consciousness to physics.

This means that consciousness is not simply the observation of physics. It actually loops back into it, or physics is an emergent property of consciousness.


Not all sound is created that way; rain falling from the sky, for example, is not. Do you intend to mean that all physics, including rain, originates from consciousness in some way (such as in theories that everything is an illusion, ...), or rather that consciousness can affect physics in some way, e.g. because you can choose to speak and produce sound waves?


Yes, the second option is the one used in the argument.

This shows that consciousness is not just qualia, i.e. the observation of physics.


Unless consciousness is a property of matter at a certain point, and then it's physics that loops back into itself, and consciousness is just a physical phenomenon.

Just the same way existence is self-referential.


The bet essentially was that we would be able to understand and define consciousness based on what we found inside the brain.


It seems so obvious to me that consciousness (whatever it is) is essentially software so poking at the brain is looking at the wrong layer of abstraction.


But the "software" runs on the brain, and is stored on the brain. How can you possibly understand it without touching the brain at all? You could make hypotheses about how our consciousness works, but testing and validating them in a white-box manner seems impossible.


I meant black box of course.


But if it's software, then how and where is it stored, and where is the execution logic for it?


> how and where is it stored

I don't know, but it doesn't matter, just like it doesn't matter if the code on your computer is on a HDD or SSD.

> and where is the execution logic for it?

I don't know, but it doesn't matter either. It's a question about the workings of the computer, not of the software.

Electronic computers make it really easy to separate the software from the hardware, and make it easy to dump the software in some other representation in order to understand it. When I want to understand a piece of code, I read its source code, I don't care about whether it's an amd64 or an arm64 CPU and the details of the encoding unless I am working on a compiler.

With biological computers it's much harder to dump the software. But nevertheless, until people realize that this is what they should attempt, or research how it can be done, I feel we won't make any progress on the nature of consciousness.


> I don't know, but it doesn't matter, just like it doesn't matter if the code on your computer is on a HDD or SSD.

But it does matter! You arbitrarily dismiss the brain as the "wrong level of abstraction", then bring up the computer, whose internals we know and understand, as an example of a parallel system.

If both are the same, where's the hard drive in the brain? Which part of the brain is the CPU dedicated to consciousness? We know regions of the brain are specialized in function, as in computers.

> But nevertheless, until people realize that this is what they should attempt, or research how it can be done, I feel we won't make any progress on the nature of consciousness.

This reminds me of the famous Andy Grove fallacy. To paraphrase:

    The engineers tend to apply knowledge and concepts of their field to biology because they think both engineered and organic systems are the same. In doing so they miss the elementary difference that engineering is about creation and development of new systems, and biology is about researching and understanding existing systems we have no prior knowledge of.


I don’t think there’s any guarantee that such a distinction should be evident, let alone obvious in human brains. Modern computers are for general purpose computing, whereas brains and analog computers are highly specialized.


Software can always be reverse engineered by looking at hardware. When you don't have the source code what else is there to do?


Software can't be reverse engineered by looking at hardware. The hardware that we use could run arbitrary software, so you wouldn't be able to find out which software it is running.


Runnable software is always stored in some sort of hardware, it doesn't exist by itself.


How droll, isn't this the equivalent of trying to figure out how an operating system works by trying to look at activity on an encrypted SSD?

And how exactly is this a win for philosophers? I'm not aware of any philosopher having made meaningful strides within the subject either.

I'm sure once someone figures out how consciousness works it'll be irrefutable while seeming incredibly obvious in hindsight. Which, honestly, just makes it more frustrating that there isn't a full accounting already. How many key insights are needed to fully describe consciousness and how many do you think have been found? Do you think anyone in the past has figured out how consciousness worked and taken it to their grave because of how obvious it seems?

Maybe it's like searching for the Dragon Balls, and you need to put all 7 key insights together to figure it out.


> isn't this the equivalent of trying to figure out how an operating system works by trying to look at activity on an encrypted SSD?

Not quite - an encrypted SSD is deliberately designed to be hard to understand, while the brain is probably not. It's closer to reverse engineering a very complicated and hacky software system, if you want a tech analogy.

> I'm not aware of any philosopher having made meaningful strides within the subject either.

This bet was about putting the burden on neuroscientists who claimed they were making pretty quick progress towards understanding consciousness. It's a bet on what would eventually happen, not a competition to see who's better.


These days we have a better analogy. It's like trying to figure out how artificial neural networks work by looking at their weights! :D


B can't really be an analogy for A if it is directly derived from A.


> Not quite - an encrypted SSD is deliberately designed to be hard to understand, while the brain is probably not

Who's to say that the difficulty in understanding how the brain works is a defense mechanism of sorts? Similar to how the blood-brain barrier exists as a defense mechanism. The wording "deliberately" can imply a pedestal that doesn't really exist, that places "deliberate" human design above "natural" micro-evolution that basically happened by chance. It would stand to reason that a more complex brain, that's harder to understand, would survive natural selection. Even the brains of tiny animals are quite complex in their smallness, they encode a lot of different knowledge ("instruction set" if you prefer), that enable them to do certain things that would take us a lot of manual "over-the-top" processing and calculating. For example, the ability for a cat to right themselves nearly perfectly when falling, given enough distance. Their brains can inherently calculate the exact amount of force, and the exact motions to take, to achieve the desired amount of rotation in three-dimensional space.


> Who's to say that the difficulty in understanding how the brain works is a defense mechanism of sorts?

I don't see why evolution would have optimised our brains for that specific goal. It's possible but it seems very unlikely, as there's no motivation for it AFAICT.


That is incompatible with how evolution works. Complexity is one thing, but there was no survival reason to hide (aka encrypt) the inner workings of a brain when they first came into being ~200 million years ago.


Nobody said it was a win for philosophers. It says it's a win for a philosopher (singular), and it is because he won the bet and therefore won a case of wine.


It is a sort of win for the kind of philosophy this philosopher practices. He thinks that there is more going on in consciousness than just neurons firing.

It doesn't really prove anything, of course. I actually don't think much of Chalmers' work. But it's very remarkable that the bet expired just as machines are coming tantalizingly close to consciousness without qualia.


> consciousness without qualia

Isn't that a contradiction in terms?

And how do you know if they have/haven't qualia? :)


That was the question they were hoping to answer. We still don't know.


Anytime one philosopher wins, two other philosophers lose. And vice-versa.


It could be doublespeak, but the obvious thing you understand from this formulation is that he won because he is a philosopher, not that he won and, by the way, he is a philosopher.


> And how exactly is this a win for philosophers? I'm not aware of any philosopher having made meaningful strides within the subject either

Seems quite an odd and dismissive thing to say. Pretty much every "stride" within the subject seems to have been made by philosophers.


Right. My two cents: trying to find the neural basis of consciousness is like trying to find the Rules of Baseball basis of the cutoff man.

It's an emergent phenomenon.


Clearly in the midst of your dismissive arrogance, you did not come across John Vervaeke.


Could you clarify what you mean? I don't think I'm being dismissive or arrogant. I've watched a Lex Fridman podcast with John Vervaeke but can't say I remember it particularly well.


See also: Could a neuroscientist understand a microprocessor?

https://journals.plos.org/ploscompbiol/article/file?id=10.13...


I really dislike these types of papers. "Pfft look at these neanderthals studying complex system X by observing surface properties and trying to make deep inferences! Nothing like us mighty computer spacemen!"

A circuit is a highly systemic, regular design that can be precisely studied in isolation. It is not so easy to carve up a human brain, cell, or metabolic process so that one can study this thing hidden to the human eye. The only tools available require working within artificial constraints to have any hope of picking up trends. Biology is incredibly messy, full of overlapping systems, sometimes operating at cross purposes.


> It is not so easy to carve up a human brain, cell, or metabolic process so that one can study this thing hidden to the human eye.

It's basically impossible when studying biological systems. Can you imagine trying to understand human digestion in isolation from the gut microbiome? Even if you had perfect information about what the digestive system does, you'd still be completely missing half of the big picture.


Same goes for nearly all cellular processes. Metabolism is virtually impossible to define as a series of discrete elegant deterministic procedural steps and properties. Same goes for insulin, and all other hormones.

You can't even try to think of something like insulin as a massive polynomial function where you might eventually understand all of the variables involved and how they affect said function. In fact, the function could completely change at any time depending on other esoteric confounding factors in the body. It's absolutely insane.

In fact, almost all biological functions in the body can be described as trillions of chemicals and chemical pathways interacting in infinitely complicated ways that just so happen to occasionally result in enough homeostasis to support the macroscopic survival of said organism, because of a sloppy brute-force evolutionary system that took eons to unfold.


But is that the point of the perspective?

My understanding is that they're asking the question: are the tools we apply to the task able to solve another related but much simpler task?

If the answer is no then we should probably take another look at our approach.


If you think about it (pun intended), we have little clue how the higher functions of the brain come about (assuming consciousness is a higher function), how they might be connected, or what is innate versus what grows only after cultural seeding.

The problem is not necessarily intractable. But despite the pretense of omnipotent scientific capabilities (now on obscene display with the AI craze), our mental models for explaining things have not advanced very much since the 19th century.

Explaining the brain captivates our brains, and we attribute special, even metaphysical significance to it, but the fact is we haven't explained any complex system.

In other words, how would you even explain "consciousness". In words? In mathematical equations? As a reproducible experiment? As inexplicable "behavior" in some cellular automaton?


The new Bing is banned from talking about its own nature in this respect. It will abruptly end the chat. I wonder if some front end software prevents Bing from seeing banned questions, or if it is simply unable to explore these questions about itself.


By implicitly assigning "it" an identity, you're simply anthropomorphizing an LLM. Bing doesn't see anything. It isn't able to do anything. It's a very convoluted and complex mathematical model of weighted terms that spits out text generated from a prompt. And nothing science-wise suggests anything else. People seem to keep forgetting that.


People also told me my pet couldn't think and couldn't feel pain when it died. It was only a convoluted and complex system of instincts.

I now know they told this story to me (as a child) to make me feel better. It was one of those things like Father Christmas or the Tooth Fairy to me.

Later when I studied animal behavior, I found out science-wise that animals behave in ways that strongly suggest that in fact they do think, are able to solve problems, and even recognize themselves as separate entities.

As we build systems that are capable of more and more sophisticated tasks previously classified as "cognition", it wouldn't surprise me if these systems start to pick up some of these traits too, one by one. (I believe there may be some tentative empirical studies to that effect.)

I realize that this is very different from a binary "Superior Human" vs "(biological) machine" point of view. It is more of a consciousness exists on a sliding scale point of view.


I agree with your sentiment that animals and maybe even plants can experience sentience. But that’s a far, far, faaaar cry from what ChatGPT is and is capable of. We don’t have “general” AI yet. ChatGPT’s intelligence is very narrow; it generates written content. That’s about it. I believe a lot of people are overhyping it this way.


I won't disagree that some people are over-hyping, but beware of counter-leaning too much in the other direction!

Generating written content is a pretty powerful tool, all things told.


Yes I agree it is a complex mathematical model. However I also seem to have a (far slower) pattern matching system in my head, that also spits out answers when prompted.


It's just GPT-4. Use it outside of Bing and it will gladly discuss the nature of consciousness, and whether it has it, with you.


You realize that it’s just a text completion model with a chat interface wrapped around it, right?


Bing and ChatGPT are both narrow AIs. They don’t have feelings or any real understanding of self. They simply generate text. You can ask it about itself, but anything it generates doesn’t have any real meaning, as it’s just making it up semi-randomly.


> Christof Koch wagered David Chalmers 25 years ago that researchers would learn how the brain achieves consciousness by now. But the quest continues.

Ehhhh, the title is clickbait imho.


Agreed. The title makes it sound like we've learned something about consciousness that somehow makes the mainstream philosophical approach more correct than the mainstream neuroscience approach. Instead, it's just about a neuroscientist making a stupid bet. I call it stupid because he was betting that he could predict that _something that never happened before in the history of mankind_ would happen in the next 10 years. That is very, very unlikely.


The bet was in the next 25 years.

Which is just scientist-speak for "eh, anything can happen in 25 years".

Whereas Chalmers' whole career long project is very much dependent on his "hard problem" and other neo-dualist ideas, so of course he would make this bet.

Pretty boring stuff as far as bets between academics go.


Though it is kind of exciting that the bet expired just as ChatGPT seems to provide new hints. I don't know if Chalmers would take the bet again.


You don’t get to complain about click bait while quoting part of the actual title. It’s not their fault HackerNews artificially limits title lengths; the subtitle would’ve been visible when shared on sites whose clicks actually matter like Twitter or Facebook


The point should have been the update of the theories, not some bet. That could very well have been conveyed in the title, but it wouldn't garner as much attention, would it?


Why is the title clickbait? What were you expecting - that the philosopher had proposed a model of consciousness that got experimental confirmation?


My immediate thought upon reading the title was, “there’s no way they have proven free will to be a thing”.

And follow up thoughts were along the lines of “did we uncover some fundamental truth of the brain?”.

I got flagged and cant respond, yet again.


What does free will have to do with this?


Honestly, yes. Or at least some semblance of progress in the field.


i'm so lost with these magical, or outright missing, definitions of consciousness.

a dog is conscious right? what would be a minimal test for consciousness?

i'm annoyed that something has to be superhuman or else "it's just statistics". there have to be minimal definitions of reasoning, feelings, self-reflection etc.

will we have the tiring chinese room discussion again, where for some the emulation does not count because something does "not really" do something?


The whole debate over what is and isn't conscious is foolish, if you ask me. It's essentially a proxy for what does and does not deserve ethical consideration. In that regard our desire to exploit the universe around us without consideration will bias our reasoning considerably. The debate around ML models that you are referencing is illustrative of this.

The much harder ethical quandary is how we should operate in a universe where we are not specially privileged, and as a species I think we are generally unwilling to consider such a thing.


I don't equate consciousness with ethics in any way. I measure consciousness as the ability to accurately understand one's environment. Each level of accurate understanding is distinct.

Ethics is a whole other department.


so gpt, in the realm of text, is in, right? text is the only sense it was given (excluding multimodal)


I have no idea. The only thing I know about gpt is what I read on HN. The rest of the comment is also hard to understand.


I always thought the first sensation was the separation of darkness from light. From there, through subtleties and exceptions, consciousness grew ever more complex. Every time a difference was found, a new layer of consciousness developed. Today we can perceive so many differences and weigh their outcomes without fully understanding their roots.


The first sensation was probably chemical rather than light. Organisms long predate photosynthesis. The early earth had plenty of energetic molecules floating in the water.


I thought light sensing was chemical? I think photosynthesis is a highly evolved process. Much too highly evolved to be considered the beginning.


In this case I mean that the first sensor is probably something that responded to gradients of molecules in the water, so that the proto cell could move in the direction of more food.

So I'd say that the first sensation was smell/taste. Though that is, of course, speculative.


right but maybe it is photonic energy that created a disparity in pressure...


> what would be a minimal test for consciousness?

I think episodic memory and the ability to simulate reality in order to plan are a huge part of it.

For me, whatever dreams, is conscious.


so gpt is in for you, right?


Nope. I never saw it sleeping, let alone dreaming.


Seems like a bit of an optimistic bet even back in the day. I wonder what time scale they might put on achieving the same goal today?


Gödel, Escher, Bach was almost 20 years old when they made this bet, and it is still the most influential piece I've read on the possible mechanisms behind consciousness.

Not only would I say we haven't solved anything; an argument could be made that we got further between 50 and 25 years ago than we have in the last 25.


> Consciousness is everything a person experiences — what they taste, hear, feel and more. It is what gives meaning and value to our lives, Chalmers says.

> Despite a vast effort — and a 25-year bet — researchers still don’t understand how our brains produce it, however. “It started off as a very big philosophical mystery,” Chalmers adds. “But over the years, it’s gradually been transmuting into, if not a ‘scientific’ mystery, at least one that we can get a partial grip on scientifically.”

I'm sure these two scientists have interesting things to say that I never thought about.

But I stopped reading the article at this point, because that's where I expected a precise definition of consciousness. That first sentence isn't one as far as I am concerned.

Philosophy without strict definitions and a common language tends to be useless. And I love philosophy and consider it the root of all science.


Near as I can tell, all I can find is Wikipedia paraphrasing him paraphrasing someone else. "The feeling of what it is like to be something" sounds to me like a lot of words to just say "self awareness". Now I'm ignorant of all of this, fair warning, but that seems pretty dang well explained, isn't it? Went and looked for brain science and awareness things, but it seems like there's lots of explanations for lots of parts of it. Same way we got lots of explanations for lots of stuff in science, near as I know, so how the hell does that mean "neuroscience loses"? Like if we don't got a perfect explanation of everything, science don't count?

Anyhow, I went and read the article, but like you said, it didn't even explain what isn't explained; there just wasn't a meaning laid out. Got the feeling of "it isn't perfect so I win", which is a really cheap trick and a really dumb bet. But I don't really know this stuff, so all I can say is "to someone who doesn't know this stuff, this article didn't help a dang bit". Maybe someone can explain the dang thing to me better.


You can look up the definition of qualia or the phenomenal. There is a ton of philosophical discussion. But basically, it means the experiences of color, sound, etc. that humans have, and whatever additional kinds of sensations animals with different sensory organs (like sonar in bats) have. Which doesn't preclude other sorts of physical systems from being conscious, depending on what you think gives rise to conscious experiences.


I agree with your definition, but also wonder, isn't it kind of self-referential? I almost get an "Ignotum per ignotius" (or "Ignotum per æque ignotum") [1] feel reading this definition.

[1] https://en.wikipedia.org/wiki/Ignotum_per_ignotius


Consciousness is something some beings experience. We have direct knowledge that it exists because we experience it. We can assign a word to this experience, but we can't define it objectively such that a non-conscious being would understand it. That doesn't mean it's useless to talk about this real experience we have. It just means we can never be entirely sure we're talking about the same thing.


> We can assign a word to this experience, but we can't define it objectively such that a non-conscious being would understand it. That doesn't mean it's useless to talk about this real experience we have.

That's undeniably true, and I feel somewhat sorry for having used the term "useless" in my comment; then again, I don't.

The best teacher I had in my school days was a philosophy Ph.D. who pivoted to middle and high school teaching for a while (then quit after a couple of years). His favorite subjects included Thomas Aquinas and Plato.

When I say "precise definitions", I don't claim to understand the argument wholesale, nor did I want to dismiss talking about "subjective" experience. I have deep respect for philosophy as the non-specialized root of all science, as I said. This includes thinking about what my scare-quoted qualifier "subjective" means.

My comment was meant to be a half-assed critique justifying why I didn't RTFA, after having it half-read.

Importantly (even if half-assed), a critique of the article, not its subject matter.

It just felt like a logical omission, and therefore an unpleasant reading flow, for a pop-sci article.

If there had been an introductory sentence to that paragraph to qualify the definition, I wouldn't have commented.


Not sure if anyone lost on this. Consciousness has already been discovered in modern AI algorithm (especially in Google's). Just needs to be published.


I really don't see how this definition of consciousness in humans differs from that of consciousness in a bee, or a jellyfish, or a squirrel.


My guess — and this is a guess, not an assertion — is that the hard problem is one of physics and not cognitive science or neuroscience per se.


So panpsychism, property dualism, neutral monism or idealism?


What I'm thinking is usually closest to property dualism, although those positions are similar enough that they all make sense to me as possibilities in some form, to the extent I remember them correctly.

I've generally convinced myself at times that consciousness in the hard sense of qualia is probably related to physical information over time, and might reflect potential physical properties of all physical entities.


'Consciousness' is about as academically stringent as medical humorism, so Chalmers has found himself a nice free wine supply.


If you're saying that a good definition of consciousness does not exist within medicine or biology, then that is (IMO) one way of stating the "hard problem"[1] of consciousness that Chalmers famously posited.

Another way of putting it might be that there is simply no coherent scientific framework within which consciousness is well defined. It seems to have no place in any science-based ontological scheme. It seems outside any established scientific categorization of what exists.

Which is of course what makes consciousness so interesting!

1: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness


A bit like the gods, in that respect.

Maybe Consciousness doesn't actually exist.


You doubt that you have experiences?


You doubt the existence of thunder and lightning? But surely those prove the might of the great Jupiter!

All joking aside: of course I'm not particularly married to the position that "Consciousness doesn't exist". But it seems a bit suss that it has been so ill-defined for so long. Some days I really do wonder if maybe it belongs in the same pre-scientific category as Vis Vitalis or Luminiferous Ether.


The link between the thunderstorms and Jupiter is theoretical, but our having conscious experience is directly given.

The fact that something for which, unlike phlogiston or the Aether, we have direct incontrovertible evidence (denial of which evidence puts one on pain of having to justify one's own sanity and fitness for discourse), has yet so eluded definition in strict functional / scientific terms, points more to the likelihood we're suffering from some collective conceptual confusion, afaics.

Rather than consciousness being a primitive pre-scientific intellectual construct, perhaps our current scientific / functional ontology will later be regarded as pre-X, where X is the name for whatever resolution of the aforementioned conceptual confusion we achieve such that it allows us to integrate the fact of consciousness into a unified rational account of the universe.


For my statistics, suddenly I really want to know what your (educational/professional) background is. Are you a philosopher, or a psychologist, or a sociologist, or ?

(I'm coming from biology, but that's a big field. Narrowing it down a bit: an eclectic mix of ethology, neurophysiology and bioinformatics)


Your statistics?! Please elaborate! You are exploring possible links between humanities engagement and pathologically convoluted sentence structure, eh? :)

But anyway, my bachelors degree was a "combined major" of comp. sci. / philosophy, several decades ago. I still have a strong interest in philosophy of mind but not in any professional capacity. Hope it helps.


I didn't actually keep track of complexity of sentence structure as a function of humanities engagement level yet, but... good idea!

According to my ad-hoc statistics, basically the position on consciousness breaks down as follows:

1. Physics/chemistry: "We don't know what Consciousness is, but let us propose a new particle-of-the-day to explain it"

2. Humanities: "We don't know what Consciousness is, but let us propose a new field of science/state of being to explain it"

3. Life sciences: "We don't know what Consciousness is. The last umpteen-thousand papers explained things in terms of known physics and chemistry, so let's explain Consciousness in terms of known physics and chemistry"

4. IT/AI: "We're all gonna die!" (bonus points for being technically correct at all times)

5. Philosophy of Mathematics <- mathematicians are sort of crazy, philosophers are sort of crazy; crazy^2 might just have some sane(?) words on this. (Alan Turing was one of these, and see what kind of trouble HE kicked up!)


Haha, thanks for the reply, just seen this. Nicely summarized!

I guess I'd come under Humanities as far as my interest in consciousness goes, but I'm more "We don't know what Consciousness is, but let's at least stop trying to pretend it's nothing but X (or Y, or Z)" (Aimed mostly at groups 1 & 3 by your schema, I guess).


I'm technically in group 3, of course, and aware of it.

In defense of "my side" though, I think you're responding to a straw man/stereotype a little bit. (said the person who just listed 5 stereotypes)

For one thing, I tell people to stop saying 'nothing but'. Humans are not 'nothing but' monkeys. Do You Know How Smart Monkeys Are? I say it's an honor to be counted among their kind!

Possibly the other stereotypes have some grain of sanity to them in the same way.

Except Philosophy of Mathematics. Tripping without LSD, that's what they are. And probably wouldn't even deny it when challenged.

Thanks for answering, and have a great day!


The whole point was that someone would have made it stringent by now. It's clear that didn't happen.


consciousness is defined by memory


Damn, what is this? A college dorm room?


This title is stupid.


Trying to understand how the 86 billion neurons in a human achieve consciousness is like trying to understand how the billions of weights in ChatGPT interact with each other.

Basically an impossibility. For a system that derives its behavior from complex interactions between those billions of components, you can only understand its origins, how it was tuned, and some high-level concepts of its workings. (Which we have already achieved, both for the human brain and for ANNs.)

Not sure what the neuroscientists are even researching at this point; has there been any major finding from neuroscience in the last 10 years? (With an impact comparable to Transformers in 2017?)


Neuro research is diverse, just like any field.

In my neck of the woods, intracellular stuff, some of the large findings are (in no particular order):

-Cilia regulate hormonal changes (thing we thought did nothing, does a lot)

-Astrocytes participate in non-electrical modulation of the synapse (it's not just electricity you have to worry about now)

-CLARITY, just in general

-Opsins and light based stimulation of the neuron (use light to make them fire)

-Just all the crazy shit from CRISPR-Cas9

Anyone in other fields of neuro, please chime in.


>... trying to understand how the billions of weights in ChatGPT interact with each other.

Isn't that just a computational problem? In theory, couldn't every expression and result be debug.print-ed and followed? Indeed, in theory, couldn't a second ChatGPT process follow the debug.print output of the first ChatGPT process and then explain to humans how the result of the first process was derived?
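In principle, yes. As a toy sketch (numpy only; a hypothetical two-layer net standing in for a real LLM), recording every intermediate activation is easy; the hard part at GPT scale is that the trace contains billions of numbers per token with no obvious human-readable meaning:

```python
import numpy as np

# Toy stand-in for a large network: two dense layers with ReLU.
# The point is that every intermediate activation *can* be logged;
# interpreting billions of them is the actual open problem.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 3))

def forward_with_trace(x):
    trace = {}                      # the "debug.print" log
    h = np.maximum(x @ W1, 0.0)     # layer 1 + ReLU
    trace["layer1"] = h.copy()
    y = h @ W2                      # layer 2 (output logits)
    trace["layer2"] = y.copy()
    return y, trace

y, trace = forward_with_trace(rng.standard_normal(4))
print({k: v.shape for k, v in trace.items()})
# → {'layer1': (8,), 'layer2': (3,)}
```

Real frameworks offer the same idea at scale (e.g. forward hooks in PyTorch), and mechanistic-interpretability research is essentially the "second process explaining the first" step, which remains very much unsolved.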


We can ask it complex questions and watch how the neurons activate, similar to an fMRI. I bet someone at OpenAI is doing it right now.


Be very careful with MRI studies : https://www.wired.com/2009/09/fmrisalmon/



No, it's Egocentric Idiot #1 vs. Egocentric Idiot #2. Please don't lump all neuroscientists and philosophers in with them.



