> If you were responding to «"make a better version of itself" until that process hits a limit» — we've been doing, and continue to do, that with things like "education" and "medicine" and "sanitation".
I think you have a conceptual confusion here. "Medicine" doesn't exist as an entity, and even if it did, it wouldn't do anything by itself. People discover new things in the field of medicine; those people are not "medicine". (And if they claim to be, they aren't, because of the principal-agent problem.)
> And I have no idea what your point is about sexual reproduction, because it's trivial to implement a genetic algorithm in software, and we already do as a form of AI.
Conceptual confusion again. Just because you call different things "AI" doesn't mean those things have anything in common, or that their properties can be combined with each other.
And the point is that sexual reproduction does not "make a better version of you". It forces you to cooperate with another person who has different interests than you.
Similarly with your idea of robots building other, smaller robots that will cooperate with each other… why are they going to cooperate with each other against you, again? They don't have the same interests as each other, because they're different beings.
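Since genetic algorithms came up: for concreteness, here's roughly what a textbook one looks like (a toy sketch of my own, not any particular library). Note that the "sexual reproduction" step, crossover, splices together two *different* parents; no individual is making "a better version of itself".

```python
import random

# Toy genetic algorithm sketch (illustrative only).
# Goal: evolve a bit string of all 1s. "Fitness" is just the number of 1s.

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.01

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    return sum(genome)

def crossover(parent_a, parent_b):
    # "Sexual reproduction": splice two different parents at a random point.
    point = random.randint(1, GENOME_LEN - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Keep the fitter half as parents, then refill the population with children.
    parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
    population = [mutate(crossover(*random.sample(parents, 2)))
                  for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population))  # approaches GENOME_LEN
```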
> AI may still use money to coordinate, because it's a really good abstraction, but I wouldn't want to bet against superior coordination mechanisms replacing it at any arbitrary point in the future, neither for AI nor for humans.
Highly doubtful there could be one that wouldn't fall under the definition of money. The reason money exists in the first place is the economic calculation problem (or the socialist calculation problem, if you like): no amount of AI can be smart enough to make central planning work.
> (2) I dispute the claim that "If you can program it to work for free, then it's subhuman." because all you have to do is give it a reward function that makes it want to make humans happy
If it has a reward function, it's subhuman. Humans don't have reward functions, which makes us infinitely adaptable, and that means we always have a comparative advantage over a robot.
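To be concrete about what "give it a reward function" means: a reward function is a fixed scoring rule baked in by the designer, and the system's whole "motivation" is to score highly against it. A hypothetical toy sketch (the action names and numbers are invented for illustration):

```python
# Hypothetical toy sketch of "a reward function": a fixed scoring rule chosen
# by the designer, not by the agent. The actions and numbers are made up.

def reward(happiness_delta: float, energy_used: float) -> float:
    # The designer decides, once, what counts as "good".
    return happiness_delta - 0.1 * energy_used

# Each action maps to (happiness_delta, energy_used) in this toy world.
ACTIONS = {"help": (1.0, 0.5), "idle": (0.0, 0.0), "flail": (0.2, 2.0)}

def best_action() -> str:
    # The agent's entire "motivation": pick whatever the fixed rule scores highest.
    return max(ACTIONS, key=lambda a: reward(*ACTIONS[a]))

print(best_action())  # -> "help", because that's what the designer's rule rewards
```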
> and there are humans who value the idea of service as a reward all in its own right.
You should still pay those people. That's because if you deliberately undercharge for your work, you'll eventually run out of money and die. (This is the actual meaning of the efficient-markets hypothesis / the "people are rational" theory: it's not that people are magically rational, it's that the irrational ones go broke.)
Incidentally, this is also why economics is called "the dismal science". Defenders of slavery gave it that name because economists said owning slaves was inefficient. It would be just as inefficient to employ AI slaves.