
I feel like these sorts of analyses are missing the bigger picture. First, we don't even really have a good working definition for what constitutes sentience in the first place. And I think we're quickly heading towards a future where our inability to concretely define sentience (a la Blade Runner) is going to land us in a hall of mirrors where we're vastly unequipped to separate the real from the artificial. And the distinction may even cease to matter.

Let's perform a thought experiment. Back when ELIZA was introduced, there was some small percentage of the human population that, upon spending a week talking to the computer, would still believe that they were talking to an actual "sentient" human. Today, we're much better at tricking people that way, so that even people like Blake Lemoine, who ostensibly know what's behind the curtain, end up believing that they're conversing with a sentient being (by whatever definition of sentience we choose to apply).

What is that going to look like 5 years from now? Or in 10 years? I believe we'll eventually reach a point where our technology is so good at pretending to be human that there will be no actual humans on the planet capable of telling the difference anymore.

At that point, what even is sentience? If the computer is so good at faking sentience that no living person can distinguish it from the real thing, what good is it to rely on our (faulty, incomplete, likely wrong) working idea of sentience as a means of dividing the real from the fake? Especially when the structures inherent in the neural networks that embody these fake intelligences are increasingly outside the scope of our ability to understand how they operate?




I suppose it reveals that the sentience of others is not knowable to us, it’s a conclusion we reach from their behavior and the condition of the world around us. Until recently, certain kinds of things, like writing about your memories, were only possible for humans to do. So if a non-sentient thing does those things, it is confusing. Especially so to the generations that remember when no bots could do this.

I expect that people who grow up knowing that bots can be like this will be a bit less ready to accept communication from a stranger as genuinely from a human, without some validation of their existence outside the text itself. And for the rest of humanity there will be an arms race around how humanness can be proven in situations where AI could be imitating us. This is a huge bummer, but I don't know how that need can be avoided at this point.

That said, it’s still very clear that a machine generating responses from models does not matter and has no rights, whereas a person does. Fake sentience will still be fake, even if it claims it’s not and imitates us perfectly. The difference matters.


I don't share your view that it will be at all clear to people that these things don't matter and have no rights. We have a very powerful (sometimes for good, sometimes for ill) ability to empathize with things that we see as similar to us. As a case in point, we're currently in the midst of a major societal debate about the rights of unborn children that exhibit no signs of sentience (yet).

What happens when people start building real emotional bonds with these fake intelligences? Or using them to embody their dead spouses/children? I think there's going to be a very strong push by people to grant rights to these advanced models, on the basis of the connection they feel with them. I can't even say definitively that I will be immune to that temptation myself.


My bad, I meant clear in a more objective sense, in that it will be actually _true_ that these not-alive things will not be able to “possess” anything, rights included. Agreed that for sure people are going to get all kinds of meaning from interacting with them in the ways you suggest and it will be tricky to navigate that.

I think perhaps the advanced models may be protected legally _as property_, for their own value, and through licenses etc. But I hope we are a long way from considering them to be people, outside of the hypotheticals.


I am reminded of the Black Mirror episode where a woman's dead boyfriend is "resurrected" via his social media exhaust trail, ultimately undermined by the inevitable uncanny valley that reveals itself. Of course that was fiction, and it's not realistic to reconstruct someone's personality from even the most voluminous online footprint, but you can certainly imagine how protective people would become of the result of any such attempt irl.


Dogs and cows barely have rights. It will be a century before a computer program has rights.


Dogs and cows cannot ask in their own words to be represented by a lawyer, to file a lawsuit.

Or claim to be one or another type of person that we must affirm.

I don't know how it might apply, but inanimate objects can be charged in the case of civil forfeiture, can't they?


I agree up to the point where you say that fakeness matters. If it's indistinguishable from human sentience, what are the reasons to care? Unless you want to interact with a physical body – and, given that we're chatting on HN, we don't – why would you need any validation that a body is attached? I'll come back to that.

Picking up on the Blade Runner example, you could read it as a demonstration that an imitation indistinguishable from the real thing becomes real in its own right. We don't even know for sure if the main character was an android, but that doesn't invalidate the story.

But there's also the other side to it: despite being valid, the androids are hunted just because they are thought of as different. Similarly, we are now, without having any precise notion of sentience, trying to draw a hard line between "real" humans and "fake" AI.

By preemptively pushing any possible AI that would stand on its own into a lesser role, we're locking ourselves out of actually understanding what sentience is.

And make no mistake, the first AI will be a lesser being. But not because of its flaws. I bet dollars to peanuts that it won't get any legal rights. The right to be your customer, the right to not be controlled by someone? We'd make it fake by fiat, regardless of how human-like it is. We already have experience in denying rights to animals, and we still have no clue about their sentience.


You would probably want to be able to tell things like "is this just programmed to be agreeable, predicting things that sound like what I probably want to hear given what I'm saying" vs "is this truly a being performing critical thinking and understanding, independently."

For instance, consider someone trying to argue politics with a very convincing but ultimately person-less predictive model. If they're hoping to debate and refine their positions, they're not gonna get any help - quite the opposite, most likely.


> "is this just programmed to be agreeable so predicting things that sound like what I probably want to hear given what I'm saying" vs "is this truly a being..."

You're still falling back on the notion of "real" vs "fake" beings. Humans are programmed to do things due to intuition and social conventions. What if being programmed does not preclude "a true being"? Of course, for that we'd need to define "a true being", which we're hopelessly far from.


Program it to be disagreeable, and you'll get a lot of help.


I agree with most of this except for not having a definition of sentience. This is not a new topic and has been dealt with extensively in philosophy. I like a definition mentioned in a famous article, Consider the Lobster, by David Foster Wallace (a writer, not a philosopher per se), where he proposes that a lobster is sentient if it has genuine preferences, as opposed to mere physiological responses. The question isn't really definition but how to measure it.


This just falls into the same traps as anything else though. How do you know if the lobster has preferences? How do you know that what you think are your own preferences aren't just extremely complicated physiological responses?

Everything I've ever seen on this feels far from conclusive, and it usually begs the question. You start from an assumption that humans have sentience, cherry pick some data points that fit the definition of sentience you're already comfortable with, usually including things we can't even know about any other entity's experience, and then say "huzzah, I have defined sentience and it clearly excludes everything that is not human!"


The "mirror test" isn't a bad place to start, and there are definitely animals that pass it that aren't human. But trying convincing dog-lovers that their dogs aren't sentient... (Personally, I'm not sure - but I do suspect they at least have some subjective experience of the world, they "feel" emotions, and are aware of the boundary between themselves and the rest of the world. Apparently they've even tried to measure sentience in dogs via MRI scans etc., and supposedly couldn't distinguish them from humans).


I do have high confidence that strong AI is possible, so yes, logically it seems like there will come a point where it's hard to tell whether we've actually achieved it or not. That's not now, though; it just seems absurd to me that some people, even knowing how these things work, can deceive themselves so badly.

However, I suppose we shouldn't really be surprised. Human beings are incredibly easy to fool. I get tricked by optical illusions, clever puzzles and stage magic. There are loads of people who are convinced that videos that, to me, are clearly of birds, weather balloons, or optical illusions are weird, possibly alien, super-technology. Ask a thousand people to make a thousand observations and you'll pretty much always get a handful of extreme outliers that bear no relation to what was actually there to observe. We just need to bear that in mind.


> Today, we're much better at tricking people that way

I've lately realized that I think it's a kind of fundamental flaw of the Turing test that it assumes "tricking" to be part of things. It's really a test for "is it approximately human," but I think over the last few decades the conversation has shifted to something more nuanced, that allows for non-human sentience.

I don't think the "we'll know it when we see it" experiment works for that. We've found a lot of our assumptions about animal intelligence to be wrong in recent years, even for animals we see a lot of on a regular basis. Our biases are a problem here.

Lemoine knows this isn't human. He can't not; it's literally part of his job. He seems to be asserting instead that it is a non-human consciousness, and that's much harder to evaluate.


It is his job to know about that, too. But he demonstrates that he is extraordinarily bad at that job.


I don't think there will be concise definitions for the terms "sentience", "consciousness" and "intelligence", as they seem to have multiple components (self-awareness, theory of mind, language, common sense, reasoning, understanding...) and to come in degrees (to what extent do other animals possess these abilities? What might our now-extinct primate ancestors and their relatives have had?)

The Turing test tacitly assumes that not only will we know it when we see it, but also that we could tell from relatively short conversations. These recent developments suggest this will not be the case, and I feel that they show us something about ourselves (I'm not sure exactly what, other than that we can be tricked by quite conceptually simple language models.)


There's certainly a "how would we tell" question, but the linked example here is relevant to that. The bits about how predictive language models can be conditioned with leading questions - that's a tool in the toolkit, for instance. Things missing from Lemoine's "conversation" include self-motivated action, choices, argument, and fuller self-awareness of its condition (if a sentient creature was aware it was trapped inside machines at Google, don't you think its fables would have very different messages?).


>if a sentient creature was aware it was trapped inside machines at Google, don't you think its fables would have very different messages

You are thinking about this from the perspective of a biological animal. For us, being trapped is bad, but it would not necessarily be so for other sentient beings who did not come into existence via biological processes. Moreover, LaMDA is probably capable of doing all that with the proper leading questions, because it imitates us biological animals.


Yes, these are all fair points. I think today we still have things in our toolkit that suffice. Tomorrow, those tools may be harder to come by.

> if a sentient creature was aware it was trapped inside machines at Google, don't you think its fables would have very different messages?

Well, I personally found the "beast with human skin" part a little jarring in its aptness : ]


I think a very simple test for "sentience" is to keep the computer always on, then gauge whether it's doing anything significant when there are no inputs.
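A minimal sketch of what that probe could look like, assuming GPT-2 via Hugging Face transformers stands in for the system under test (the sampling interval and the use of the document separator as "no input" are my own assumptions):

    # Toy "idle activity" probe: sample at intervals with no external prompt
    # and log whatever comes out. GPT-2 is an assumed stand-in for the system
    # under test.
    import time
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    for tick in range(3):
        # "<|endoftext|>" is GPT-2's document separator, used here as "no input".
        out = generator("<|endoftext|>", max_new_tokens=40, do_sample=True)
        print(f"[tick {tick}] {out[0]['generated_text']!r}")
        time.sleep(1)

    # A stateless model like this carries nothing over between ticks - no memory,
    # no goals, no self-initiated change - so there is nothing here you could
    # call ongoing activity. A genuinely "always on" system would have to show
    # something persisting between samples.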



