Hacker News

Computers are somewhere between humans and bacteria on the consciousness scale. Biological and mechanical are just two different ways to shuttle electrons around.

I will proudly stand up for the rights of computers as citizens of this country when they exhibit significant signs of the ability to choose their own course and hold opinions.

The computers will be our children, they will colonize the galaxy, and if we are lucky we can subscribe to their experience streams.




Not that it really invalidates your point in any way, but for the sake of accuracy I note that the electrical currents in our neurons are not made of electrons, but rather atomic ions, such as sodium and potassium, and that communication between neurons happens (mostly) via neurotransmitters, which do not really carry current at all (or do so only incidentally and over short distances).

I guess you could say "two different ways to shuttle information around", but even that isn't quite right in the biological case. The way information is stored and retrieved in our brains is quite different from the way it is stored in today's computer memory, as demonstrated by the recent article posted on HN talking about how accessing a memory in one's brain involves recreating it from scratch, modifying it in the process. But that's not to say that we can't create a computer that more accurately reproduces the mechanisms of our brains, or that doing so is necessary to create a conscious computer.


We could also say we don't communicate with current but with signals; electric charge flow is rather different from electron flow.

There is indeed still debate about whether our nervous system is fundamentally electric. Electricity is certainly part of it, and can be used to influence it, but there are other chemicals and structures at work as well, as you say.

The point of the Turing test is really to say that if you can't tell whether you are talking to a man or a machine, and you believe a man is sentient, then you must assume the machine is as well. We have no other mechanism to decide this. We can even debate how we can prove ourselves to be sentient.


It's duck typing for sentience.
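The analogy can be made literal with a playful Python sketch (the classes and the judge are hypothetical, purely for illustration): duck typing means we never inspect an object's type, only its behavior, which is exactly the stance the Turing test takes toward minds.

```python
# Duck typing: the judge never checks isinstance(), only behavior.
class Human:
    def converse(self, prompt):
        return "I think, therefore I am."

class Machine:
    def converse(self, prompt):
        return "I think, therefore I am."

def judge(subject):
    """Interrogate a subject purely through its observable behavior."""
    return subject.converse("Are you sentient?")

# If the answers are indistinguishable, the judge has no basis
# for granting one subject a status it denies the other.
print(judge(Human()) == judge(Machine()))
```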


Our individuality comes from the fact that we have a body. I don't think there will be AI citizens, but rather one AI that will quickly become knowledgeable about everything. It'll probably be able to manipulate any human being. How would you deal with that?


I don't think this body theory holds. Even with today's machine-learning methods, you can get two different results if you start two training runs of the same algorithm. They will learn different things.

Unless of course you postulate that there will be only one AI, because all instances will be networked with each other.
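The divergence point can be illustrated with a toy sketch (a hypothetical one-feature perceptron, not any particular framework): the same algorithm and the same data, differing only in the random seed used for initialization, settle on different parameters.

```python
import random

def train(seed, data, epochs=20, lr=0.1):
    """Train a one-feature perceptron; only the random init differs per seed."""
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x
            b += lr * (y - pred)
    return w, b

def predict(params, x):
    w, b = params
    return 1 if w * x + b > 0 else 0

# Same data, same algorithm, different seeds -> different learned parameters.
data = [(-2, 0), (-1, 0), (1, 1), (2, 1)]
model_a = train(seed=1, data=data)
model_b = train(seed=2, data=data)
print(model_a, model_b)  # two distinct (w, b) pairs
```

Both models classify the training data the same way, yet they are distinct artifacts, which is the sense in which two instances of "the same AI" could still be individuals.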


Are you saying that any true AI that we created would be identical to any other? Or that the first AI we created would subvert human society before we had a chance to create a second one?

To take a different tack, I don't see how having a physical body has anything at all to do with being an individual. Certainly in our case our physical bodies are part of what makes us individuals, but what's to say that the same AI instantiated twice with different random seeds each time wouldn't produce two distinct individuals?


I'm not sure that will ever be possible.

Consciousness produces logic as a tool to use. Logic does not produce consciousness.

Computers, by definition, are pure logic and rules. You can use logic to mimic consciousness, but nothing more.


> Logic does not produce consciousness.

Considering we have no falsifiable theory about what does produce consciousness, I don't see how you could possibly claim this.

You can't prove that you are conscious, nor can anybody else. I can perceive my own consciousness but I can't rigorously explain it. We have no way of determining if this alleged "consciousness" is a spectrum or binary. We can't test which life-forms are conscious and which are not. We can observe behaviors in animals that seem to imply consciousness, but is a dog conscious, or an insect, or an amoeba? We don't know.

And furthermore, if you believe in evolution, you have to believe that there was no consciousness and then consciousness at some point was created where there was none before. If logic doesn't produce consciousness, what does produce it? And whatever produced it in cellular tissue millions of years ago, who's to say we couldn't likewise produce it in a die of silicon, which like our brains is highly electrical?


> And furthermore, if you believe in evolution, you have to believe that there was no consciousness

Actually, you don't: http://en.wikipedia.org/wiki/Panpsychism


Just because you have simple rules doesn't mean you can't get complex behaviors. Multi-threaded applications using locks and mutexes have bugs that never happen the same way twice, which is why they're so notoriously hard to debug. Large systems with lots of parts (like huge websites with multiple backends) will exhibit failure behavior we can't predict even though at the very basic level, all computers are predictable. It's only manageable at that scale because we work hard to design it to be predictable.

There are a lot of complex and emergent systems that demonstrate this property of following simple rules while exhibiting complex behaviors: Conway's Game of Life, rule 110 in elementary cellular automata, ants working together, and fireflies blinking in synchrony without any global communication.

And while none of these demonstrate that computers can produce consciousness, they give a hint that computers may be able to do so, despite following rules to a T.


Our brains' neural connections are the equivalent of a computer's software.

If our brains can produce consciousness there is no physical reason we know of why a computer can't be conscious. How do you know if everyone around you is conscious? You just assume they are, because you are. In reality you have no way of telling.

The interesting thing about this post is that the robot believes it is human, because it believes it is conscious. If it acts in a way that we recognise as conscious and tells us it is, then at that point we have just as much evidence for believing the robot is conscious as we do for any human.


It remains to be seen if consciousness is even useful if your goal is to maximize intelligence. I could go on in more depth but only if someone cares to hear my thoughts on the topic. But I will leave this:

What do expertise, http://en.wikipedia.org/wiki/Flow_(psychology) and the advice to go for a walk or sleep on a problem have (or downplay) in common?

http://en.wikipedia.org/wiki/Benjamin_Libet#Implications_of_...


What you are talking about is called a "philosophical zombie". http://wiki.lesswrong.com/wiki/Philosophical_zombie (I strongly recommend you follow some links.)


> use logic to mimic consciousness

And AFAIK, we don't even know if it's computable.


If that mimicry is indistinguishable from a human, then by definition it is as sentient as a human. That's the point.


How do you know any of this is true?


YOU are pure logic and rules, come on over to my house, I will slice your brain up into disks a few molecules thick, and create a 3D molecular model of your brain. What I will find is not magic in meat, what I will find is vast arrays of interconnected systems, which is no different than had I sliced up your computer and made a 3D model of that.

I find your superstition disturbing.


> Consciousness produces logic as a tool to use. Logic does not produce consciousness.

This statement is not proven (and maybe not provable at all, not to mention probably untrue).

Also, have you considered the possibility of a THIRD factor besides logic/consciousness (like information complexity, feedback loops, etc.) producing BOTH consciousness and logic?

> Computers, by definition, are pure logic and rules. You can use logic to mimic consciousness, but nothing more.

Who proved that, when and how? This is just a circular (vicious) argument, especially as denoted by the "by definition" part.


Well, for one, I could argue that the reason for our human consciousness is our experiences, which a computer does not have.

Second, "just two different ways to shuttle electrons around" would hold true if information exchange was the only thing that mattered. Which could be true, but what if the actual chemical etc substances used are also important? Like how you can model audio (say, a WAV file), but you cannot model actual sound.



