
> The "single most economically important task" that a machine which can operate at a human (or superhuman) level, is "make a better version of itself" until that process hits a limit, followed by "maximise how many of you exist" until it runs out of resources.

Lot of hidden assumptions here. How does "operating at human level" (an assumption itself) imply the ability to do this? Humans can't do this.

We very specifically can't do this; we have sexual reproduction for a good reason.

(Also, since your scenario also has the robots working for free, they would instantly run out of resources to reproduce because they don't have any money. Similarly, an AGI will be unable to grow exponentially and take over the world because it would have to pay its AWS bill.)

> If I imagine a world where every task that any human can perform can also be done at world expert level — let alone at a superhuman level — by a computer/robot (with my implicit assumption "cheaply"), I can't imagine why I would ever choose the human option.

If the robot performs at human level, and it knows you'll always hire it over a human, why would it work for cheaper?

If you can program it to work for free, then it's subhuman.

If you're imagining something that's superhuman only in ways that are bad for you and subhuman in ways that would be good for you, just stop imagining it and you're good.




> Lot of hidden assumptions here. How does "operating at human level" (an assumption itself) imply the ability to do this?

Operating at human level is directly equivalent to "can it even come close to doing my work for me" when the latter is generalised over all humans, which is the statement I was criticising on the grounds of the impact it has.

> Humans can't do this.

> We very specifically can't do this; we have sexual reproduction for a good reason.

Tautologically, humans operate at human level.

If you were responding to «"make a better version of itself" until that process hits a limit» — we've been doing, and continue to do, that with things like "education" and "medicine" and "sanitation". We've not hit our limits yet, as we definitely don't fully understand how DNA influences intelligence, nor how to safely modify it (plenty of unsafe ways to do so, though).

If you were responding to «followed by "maximise how many of you exist" until it runs out of resources», that's something all living things do by default. Despite the reduced fertility rates, our population is still rising.

And I have no idea what your point is about sexual reproduction, because it's trivial to implement a genetic algorithm in software, and we already do as a form of AI.
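For concreteness, here's roughly what "trivial" looks like: a toy genetic algorithm in a few dozen lines of Python. Everything in it (the bitstring genome, the fitness function, the parameters) is arbitrary and purely illustrative.

    # Toy genetic algorithm: evolves random bitstrings towards an arbitrary
    # target. Crossover plays the role of sexual recombination, mutation the
    # role of copying errors. All parameters are made up for illustration.
    import random

    GENOME_LEN = 32
    POP_SIZE = 50
    GENERATIONS = 200
    MUTATION_RATE = 0.01

    def fitness(genome):
        # Arbitrary objective: maximise the number of 1 bits.
        return sum(genome)

    def crossover(a, b):
        # Single-point crossover: the child mixes genes from two parents.
        point = random.randrange(1, GENOME_LEN)
        return a[:point] + b[point:]

    def mutate(genome):
        # Flip each bit with a small probability.
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        # Keep the fitter half as parents, breed the rest from random parent pairs.
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(POP_SIZE - len(parents))]
        population = parents + children

    print(max(fitness(g) for g in population))  # approaches GENOME_LEN

Crossover here is doing the same job sexual recombination does; it's just one operator in the loop.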

> (Also, since your scenario also has the robots working for free, they would instantly run out of resources to reproduce because they don't have any money. Similarly, an AGI will be unable to grow exponentially and take over the world because it would have to pay its AWS bill.)

First, I didn't say "for free"; I was saying "competing with each other such that the profit margin tends towards zero", which is different.

Second, money is an abstraction to enable cooperation; it is not the resource itself. Money doesn't grow on trees, but apples do: just as plants don't use money but instead take minerals out of the soil, carbon out of the air, and water out of both, so too a robot which mines and processes some trace elements, silicon, and iron ore into PV panels and steel has those products as resources even if it doesn't then go on to sell them to anyone. Inventing the first VN (von Neumann) machine involves money, but only because the humans employed to invent all the parts of that tech want money while working on the process.

AI may still use money to coordinate, because it's a really good abstraction, but I wouldn't want to bet against superior coordination mechanisms replacing it at any arbitrary point in the future, neither for AI nor for humans.

> If the robot performs at human level, and it knows you'll always hire it over a human, why would it work for cheaper?

(1) competition with all the other robots who are trying to bid lower to get the business, i.e. the Nash equilibrium of a free market (see the toy sketch after point 3)

(2) I dispute the claim that "If you can program it to work for free, then it's subhuman", because all you have to do is give it a reward function that makes it want to make humans happy, and there are humans who value the idea of service as a reward all in its own right. Further, I think you are mixing categories by calling it "subhuman", as it sounds like an argument based on the value of its inner experience, whereas the economic result only requires the productive outputs — so for example, I would be surprised if it turned out Stable Diffusion models experienced qualia (making them "subhuman" in the moral value sense), but they're still capable of far better artistic output than most humans, to the extent that many artists are giving up on their profession (making them superhuman in the economic sense).

(3) One thing humans can do is program robots, which we're already doing, so if an AI were good enough to reach the standard I was objecting to, "can it even come close to doing my work for me" fully generalised over all humans, then the AI could program "subhuman" labour bots just as easily as we can, regardless of whether or not there turns out to be some requirement for qualia to enable performance in specific areas.
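To spell out point (1), here's a toy undercutting loop with made-up numbers; it's just the standard Bertrand-competition story, where identical bidders keep undercutting each other until the price hits marginal cost.

    # Toy Bertrand-style undercutting with made-up numbers: identical robot
    # bidders keep undercutting the lowest ask by one cent, because the lowest
    # bid wins the job, until the price reaches marginal cost and the margin is zero.
    COST_CENTS = 1_000     # marginal cost of doing the job, in cents
    price_cents = 10_000   # opening ask

    while price_cents - 1 >= COST_CENTS:
        # Any bidder can win the whole job by going one cent lower, so someone does.
        price_cents -= 1

    # No bidder will go below cost (they'd lose money), and none can raise the
    # price without losing the job: that's the Nash equilibrium, margin ~zero.
    print(f"price {price_cents / 100:.2f}, margin {(price_cents - COST_CENTS) / 100:.2f}")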


> If you were responding to «"make a better version of itself" until that process hits a limit» — we've been doing, and continue to do, that with things like "education" and "medicine" and "sanitation".

I think you have a conceptual confusion here. "Medicine" doesn't exist as an entity, and if it does, it doesn't do anything. People discover new things in the field of medicine. Those people are not medicine. (If they're claiming to be, they aren't, because of the principal-agent problem.)

> And I have no idea what your point is about sexual reproduction, because it's trivial to implement a genetic algorithm in software, and we already do as a form of AI.

Conceptual confusion again. Just because you call different things AI doesn't mean those things have anything in common or their properties can be combined with each other.

And the point is that sexual reproduction does not "make a better version of you". It forces you to cooperate with another person who has different interests than you.

Similarly, your ideas about robots building other, smaller robots who'll cooperate with each other… why are they going to cooperate with each other against you again? They don't have the same interests as each other because they're different beings.

> AI may still use money to coordinate, because it's a really good abstraction, but I wouldn't want to bet against superior coordination mechanisms replacing it at any arbitrary point in the future, neither for AI nor for humans.

Highly doubtful there could be one that wouldn't fall under the definition of money. The reason it exists is called the economic calculation problem (or the socialist calculation problem if you like); no amount of AI can be smart enough to make central planning work.

> (2) I dispute the claim that "If you can program it to work for free, then it's subhuman", because all you have to do is give it a reward function that makes it want to make humans happy

If it has a reward function it's subhuman. Humans don't have reward functions, which makes us infinitely adaptable, which means we always have comparative advantage over a robot.

> and there are humans who value the idea of service as a reward all in its own right.

It's recommended to still pay those people. That's because if you deliberately undercharge for your work, you'll run out of money eventually and die. (This is the actual meaning of the efficient markets hypothesis / "people are rational" theory. It's not that people are magically rational. The irrational ones just go broke.)

Actually, it's also the reason economics is called "the dismal science". Slaveholders called it that because economists said it's inefficient to own slaves. It'd be inefficient to employ AI slaves too.



