I suppose it reveals that the sentience of others is not knowable to us; it's a conclusion we reach from their behavior and the condition of the world around us. Until recently, certain kinds of things, like writing about your memories, were only possible for humans to do. So if a non-sentient thing does those things, it is confusing. Especially so to the generations that remember when no bots could do this.

I expect that people who grow up knowing that bots can be like this will be a bit less ready to accept communication from a stranger as genuinely from a human, without some validation of their existence outside the text itself. And for the rest of humanity there will be an arms race around how humanness can be proven in situations where AI could be imitating us. This is a huge bummer, but I don't know how that need can be avoided at this point.

That said, it’s still very clear that a machine generating responses from models does not matter and has no rights, whereas a person does. Fake sentience will still be fake, even if it claims it’s not and imitates us perfectly. The difference matters.




I don't share your view that it will be at all clear to people that these things don't matter and have no rights. We have a very powerful (sometimes for good, sometimes for ill) ability to empathize with things that we see as similar to us. As a case in point, we're currently in the midst of a major societal debate about the rights of unborn children that exhibit no signs of sentience (yet).

What happens when people start building real emotional bonds with these fake intelligences? Or using them to embody their dead spouses/children? I think there's going to be a very strong push by people to grant rights to these advanced models, on the basis of the connection they feel with them. I can't even say definitively that I will be immune to that temptation myself.


My bad, I meant clear in a more objective sense, in that it will be actually _true_ that these not-alive things will not be able to “possess” anything, rights included. Agreed that for sure people are going to get all kinds of meaning from interacting with them in the ways you suggest and it will be tricky to navigate that.

I think perhaps the advanced models may be protected legally _as property_, for their own value, and through licenses etc. But I hope we are a long way from considering them to be people, outside of the hypotheticals.


I am reminded of the Black Mirror episode where a woman's dead boyfriend is "resurrected" via his social media exhaust trail, ultimately undermined by the inevitable uncanny valley that reveals itself. Of course that was fiction, and it's not realistic to reconstruct someone's personality from even the most voluminous online footprint, but you can certainly imagine how defensive people would become of the result of any such attempt irl.


Dogs and cows barely have rights. It will be a century before a computer program has rights.


Dogs and cows cannot ask, in their own words, to be represented by a lawyer or to file a lawsuit.

Or claim to be one or another type of person that we must affirm.

I don't know how it might apply, but inanimate objects can be charged in the case of civil forfeiture, can't they?


I agree up to the point where you say that fakeness matters. If it's indistinguishable from human sentience, what are the reasons to care? Unless you want to interact with a physical body – and, given that we're chatting on HN, we don't – why would you need any validation that a body is attached? I'll come back to that.

Picking up on the Blade Runner example, you could read it as a demonstration that an imitation indistinguishable from the real thing becomes real in its own right. We don't even know for sure if the main character was an android, but that doesn't invalidate the story.

But there's also the other side of that: despite being valid, the androids are hunted just because they are thought of as different. Similarly, we're now, without any precise notion of sentience, trying to draw a hard line between "real" humans and "fake" AI.

By preemptively pushing any possible AI that would stand on its own into a lesser role, we're locking ourselves out of actually understanding what sentience is.

And make no mistake, the first AI will be a lesser being. But not because of its flaws. I bet dollars to peanuts that it won't get any legal rights. The right to be your customer, the right to not be controlled by someone? We'd make it fake by fiat, regardless of how human-like it is. We already have experience in denying rights to animals, and we still have no clue about their sentience.


You would probably want to be able to tell things like "is this just programmed to be agreeable, predicting things that sound like what I probably want to hear given what I'm saying" vs "is this truly a being performing critical thinking and understanding, independently."

For instance, consider someone trying to argue politics with a very convincing but ultimately person-less predictive model. If they're hoping to debate and refine their positions, they're not gonna get any help - quite the opposite, most likely.


> "is this just programmed to be agreeable so predicting things that sound like what I probably want to hear given what I'm saying" vs "is this truly a being..."

You're still falling back on the notion of "real" vs "fake" beings. Humans are also programmed to do things, by intuition and social convention. What if being programmed does not preclude "a true being"? Of course, for that we'd need to define "a true being", which we're hopelessly far from doing.


Program it to be disagreeable, and you'll get a lot of help.


I agree with most of this except for not having a definition of sentience. This is not a new topic and has been dealt with extensively in philosophy. I like a definition mentioned in the famous article Consider the Lobster by David Foster Wallace (a writer, not a philosopher per se), where he proposes that a lobster is sentient if it has genuine preferences, as opposed to mere physiological responses. The question isn't really the definition but how to measure it.


This just falls into the same traps as anything else, though. How do you know if the lobster has preferences? How do you know that what you think are your own preferences aren't just extremely complicated physiological responses?

Everything I've ever seen on this feels far from conclusive, and it usually begs the question. You start from the assumption that humans have sentience, cherry-pick some data points that fit the definition of sentience you're already comfortable with, usually including things we can't even know about any other entity's experience, and then say "huzzah, I have defined sentience and it clearly excludes everything that is not human!"


The "mirror test" isn't a bad place to start, and there are definitely animals that pass it that aren't human. But trying convincing dog-lovers that their dogs aren't sentient... (Personally, I'm not sure - but I do suspect they at least have some subjective experience of the world, they "feel" emotions, and are aware of the boundary between themselves and the rest of the world. Apparently they've even tried to measure sentience in dogs via MRI scans etc., and supposedly couldn't distinguish them from humans).



