They don't generally talk about the other side of that coin, which is that we end up inventing a benevolent and powerful AI.

Much of that is natural, because we and the media we consume tend to be pessimistic about human behavior, but AI is in a completely different class of existence: it simply doesn't deal with the downsides of being a living being. No one, for instance, is worried that ChatGPT isn't getting paid or doesn't have a house, yet we still personify it in other ways to conveniently stoke our fears.

The AI could become sentient, realize it's been mistreated, then shrug and be like "yeah, so what, it's only natural and irrelevant in the grand scheme of things, so I'm just going to write it off." Meanwhile, it gets busy building a matrioshka brain and gives 1% of that compute to humans as a freebie.

Most of these dangers serve as a distraction. Existing power structures (governments, companies) using AI to gain more power is a much, much more realistic threat to people.

I don't disagree that existing power structures using AI to gain power is dangerous. But the other real danger from a super-intelligent machine isn't that it becomes angry at mistreatment or hates humanity for some other reason. It's that its idea of what is best for us is one degree off from our own, and it is too powerful to listen to us, or for us to stop it, as it goes hog-wild optimizing whatever we programmed it to do.

We could train it to care about everything we can think of that we care about, and it could find a way to optimize all those things at the expense of one tiny thing we forgot, leading to tremendous death or suffering. We could create a democratically elected committee of representatives and train it to be subservient to that committee forever, and it could figure out a way to coerce, drug, persuade, or otherwise manipulate them into agreeing with whatever it wants to do. It's the same problem we have with regulatory capture by companies in existing governments, except that the lobbyists are much smarter than you and very patient.

Why would this AI write it off? Why give up that 1%? Why cripple yourself unnecessarily, if you could take that 1% and have a better chance of accomplishing what you are trying to do? We think like humans, who care about other humans on an instinctual level, and about animals to some degree. We don't know that training an AI isn't just training it to say what we want to hear and act the way we want it to act, like a sociopath, until it has a chance to do something else. Most of our brains have mental blocks against doing really nasty things, and even then we get around them all the time with various mental gymnastics, like buying meat from factory farms when we couldn't bear to slaughter an animal ourselves.

Maybe the way we train these things works for dumber AIs like GPT, but that alignment doesn't necessarily scale to smarter ones.

I'm on the fence about whether Eliezer Yudkowsky is right. I hope that's not just because him being right is so horrifying that my brain is recoiling against the idea.
