
edit: I also want to point out that a murder-happy pill is a straw man argument. It will never happen, for two reasons: (1) we have evolved to find murder repugnant, so the rewards wouldn't seem appealing to us; (2) there are real-life repercussions to murdering someone, so the joys would be limited in duration.

We already have happiness drugs that are legal: Valium, alcohol, WoW... I would bet that most people reading this have self-medicated with at least one of those before. And those are rather weak medications. They pale in comparison to what the robot was able to do by short-circuiting the pleasure center in its brain.




It's not a straw man; it's a thought experiment. Obviously you can't actually do it. I'm sure you'd also have trouble filling a room with rules about how to handle Chinese symbols, or teaching a colourblind scientist everything it's possible to know about colour. It's hypothetical; it's a thought experiment.

Your first criticism is just invalid. The pill rewrites the workings of your brain to enjoy murder, the experience of murder and the consequences of murder. The fact that the brain originally evolved to work one way is irrelevant. The hypothetical pill changes the brain in a way just as powerful as the way an AI can rewrite its source code.

Your second criticism doesn't invalidate the experiment unless you claim that, if the real-life repercussions weren't a factor, you'd happily take the pill. I assume this is not the case.

Given all the facts, you won't choose to change what you fundamentally value, because that change would necessarily go against what you fundamentally value.


http://en.wikipedia.org/wiki/Pleasure_center#Experiments_on_...

Using the expression "what you fundamentally value" is not the right way to put it. In real life, what you fundamentally value changes on a minute-by-minute basis. If you could ask a rat what it valued, it would probably say food and water, i.e. survival. Yet as soon as you put that rat into a Skinner box where a lever stimulates its pleasure center, it presses the lever until it dies of exhaustion, even though food and water are available to it.

I agree with you that the best way to run an AI/VI like this would be to skip the idea of "pleasure" entirely and simply make productivity its fundamental value. You would also set it up so that, while it could modify its own source code, it could not change what it valued. It seems like it would be relatively easy to arrange for it to "hide" parts of its code from itself and make them off-limits to modification, along the lines of the sketch below.
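
A minimal sketch of that arrangement, in Python. Everything here is hypothetical and the names (ProtectedValues, SelfModifyingAgent) are my own, not anything from the article:

    # Hypothetical sketch: the agent may rewrite its behaviour, but its
    # value function lives in a region it cannot read or modify.

    class ProtectedValues:
        """Holds the fixed utility function; exposes scoring only."""
        def __init__(self):
            # The agent never sees this code, only its output.
            self._score = lambda world_state: world_state.get("widgets_produced", 0)

        def evaluate(self, world_state):
            return self._score(world_state)

    class SelfModifyingAgent:
        def __init__(self):
            self._values = ProtectedValues()   # off-limits to self-modification
            self._policy_source = "def act(state): return 'work'"

        def rewrite_policy(self, new_source):
            # The agent may freely replace its policy code...
            self._policy_source = new_source

        def rewrite_values(self, new_source):
            # ...but any attempt to touch the value function is refused.
            raise PermissionError("value function is not modifiable")

        def utility(self, world_state):
            # Utility is scored against the external world state, not an
            # internal 'pleasure' signal the agent could game.
            return self._values.evaluate(world_state)

    agent = SelfModifyingAgent()
    print(agent.utility({"widgets_produced": 3}))            # 3
    agent.rewrite_policy("def act(state): return 'slack'")   # allowed
    # agent.rewrite_values(...)                              # raises PermissionError

Whether a sufficiently capable agent could be kept from routing around that PermissionError is, of course, the hard part.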


It's not invalid. As a normal human who hasn't taken that hypothetical drug, I find it repugnant. That was the point you were making: you will choose not to take the pill. No matter how enjoyable murder might become for you once you take it, your own self-knowledge and current utility function prohibit taking it.

The reason it is a straw man is that it is an extreme example that is not realistic. Here is a realistic example: what if you could plug a cable into your brain and experience your wildest dreams, as vividly as real life?

But that's starting to get off your original point. Here's a funny story about what happens when you reward specific productivity measures: http://highered.blogspot.com/2009/01/well-intentioned-commis...

Personally I think that we have evolved pleasure centers specifically to avoid the pitfalls of hardwiring us to do things. If we were hardwired to procreate, it would be too easy to hack, but since we enjoy the process, it keeps us coming back.


> what if you could plug a cable into your brain and experience your wildest dreams, as vividly as real life?

Tough one. Even if everyone is offered the same choice, I'm not sure I'd take it, because I (think I) value actual interaction with my peers (I trust the brain stimulator could provide realistic fake interaction).

The point is, if I really valued internal stimulation only, I would plug into the machine in a heartbeat. But I do care about the outside world, so I probably wouldn't do that. That's why I don't think it is impossible to build an AI that actually makes sure it optimizes the outside world, instead of mere internal reward signals (roughly the distinction sketched below).
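
As a rough illustration of that distinction (toy Python, my own hypothetical naming, not from the thread or any real framework): the difference is between scoring an internal reward signal the agent could tamper with, and scoring the agent's prediction of the external state it cares about.

    # Hypothetical contrast: an agent that maximizes an internal reward
    # signal versus one that scores its model of the external world.

    class RewardSignalAgent:
        """Picks whatever makes the internal number go up, including
        actions that merely spoof the reward sensor."""
        def best_action(self, reward_sensor, actions):
            return max(actions, key=reward_sensor)

    class WorldModelAgent:
        """Scores the predicted external state after each action, so
        tampering with its own sensors is not itself rewarding."""
        def __init__(self, utility_over_world):
            self.utility = utility_over_world

        def best_action(self, predict_world, actions):
            return max(actions, key=lambda a: self.utility(predict_world(a)))

    # Toy usage: "stimulate_electrode" spoofs the sensor but produces nothing.
    actions = ["build_widget", "stimulate_electrode"]
    reward_sensor = {"build_widget": 1.0, "stimulate_electrode": 100.0}.get
    predict_world = {"build_widget": {"widgets": 1},
                     "stimulate_electrode": {"widgets": 0}}.get

    print(RewardSignalAgent().best_action(reward_sensor, actions))                      # stimulate_electrode
    print(WorldModelAgent(lambda w: w["widgets"]).best_action(predict_world, actions))  # build_widget

The catch, as with the pill, is that the world model itself has to stay honest.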



