
> This is needlessly provocative,

Perhaps; this is something I find annoying enough that my responses may be unnecessarily sharp…

> and also wrong. My metrics have been the same from the very beginning (i.e. ‘can it even come close to doing my work for me?’). This question may yet come to evaluate to ‘yes’, but I think you seriously underestimate the real power of these models.

Okay then. (1) Your definition is equivalent to "permanent mass unemployment", because if it can do your work for you, it can also do your work for someone else. (2) You mean either "over-estimate" or "the real limits of these models". The only reason I bring up an obviously minor editing slip, one I fall foul of myself in many comments, is that this is exactly the kind of mistake people seize on as evidence of the limits of AI, treating small inversions like this as proof of uselessness.

> Is it conceivable that we could at some point slip into a world in which there is no funding for genuinely interesting media anymore because 90% of the population can’t distinguish it?

As written, what you describe is tautologically impossible. However, assuming you mean something more like "genuinely novel" rather than "interesting": absolutely, 100% yes. There are also plenty of ways this could permanently end all human flourishing (even when used as a mere tool, e.g. by dictators for propaganda), and some plausible ways it could permanently end all human existence (it's a safe bet someone will ask it to, and will try to empower it to that end; the question is how far they get).

> The real danger of genAI is that it convinces non-experts that the experts are replaceable when the reality is utterly different.

Despite the best models acing tests in medicine and law, the International Mathematical Olympiad, LeetCode, and so on, there are no real tests of how good someone is after a few years of employment, which means your point and mine can both be true simultaneously. I suspect the real threat current LLMs pose to newspapers is that they fully automate the Gell-Mann Amnesia effect, even though they beat humans on every measure of intelligence I had while growing up; depending on which measure exactly, either they beat all of humanity combined by many orders of magnitude, or at worst they land somewhere near the level of a rather good student taking the same test.

> In some cases this will lead to serious blowups and the real experts will be called back in, but in more ambiguous cases we’ll just quietly lose something of real value.

Hard disagree about "quiet loss". To the extent that value can be quantified, even if only by surveying humans, models can learn it. Indeed, this is already baked into the way ChatGPT asks you for feedback about the quality of the answers it generates. To the extent we lose things, it will be a very loud and noisy loss, possibly literally in the form of a nuke going off.
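
To make "models can learn it" concrete, here's a minimal sketch (in Python, with entirely made-up data, and emphatically not ChatGPT's actual pipeline) of the standard trick behind those feedback buttons: fit a Bradley-Terry reward model to pairwise human preferences.

    # Sketch: learn a scalar "value" score from pairwise human
    # preferences, Bradley-Terry style. All data here is invented.
    import numpy as np

    rng = np.random.default_rng(0)

    answers = rng.normal(size=(6, 2))       # feature vectors for 6 answers
    true_w = np.array([1.5, -0.7])          # hidden "human taste" to recover

    # Simulated survey: which of each pair of answers do humans prefer?
    prefs = [(i, j) for i in range(6) for j in range(6)
             if answers[i] @ true_w > answers[j] @ true_w]

    w = np.zeros(2)                          # learned reward weights
    for _ in range(500):                     # gradient ascent on log-likelihood
        grad = np.zeros(2)
        for i, j in prefs:                   # P(i beats j) = sigmoid(r_i - r_j)
            d = answers[i] - answers[j]
            grad += (1 - 1 / (1 + np.exp(-d @ w))) * d
        w += 0.01 * grad

    # The learned scores should rank the answers the way the survey did.
    print(np.argsort(answers @ true_w), np.argsort(answers @ w))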




> (1) your definition is equivalent to "permanent mass unemployment" because if it can do your work for you, it can also do your work for someone else

This wouldn't happen because employment effects are mainly determined by comparative advantage, i.e. the resources that could be used to "do your job" will instead be used to do something they're more suited to.

(Not "that they're better at"; it's "more suited to". You do not have your job because you're the best at it.)
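
For anyone who hasn't met the textbook version: with made-up numbers, even a machine that is absolutely better at everything still leaves the human the task where their relative disadvantage is smallest, because the machine's hours are finite too.

    # Toy comparative advantage, invented numbers: the machine beats the
    # human at BOTH tasks, yet specialisation + trade still wins.
    machine = {"code": 10, "docs": 8}   # units produced per hour
    human   = {"code": 1,  "docs": 4}   # worse at both

    # Opportunity cost of one unit of docs, in forgone code:
    print(machine["code"] / machine["docs"])  # 1.25 code per doc
    print(human["code"] / human["docs"])      # 0.25 -> the human is the
                                              # "cheaper" docs producer
    hours = 8
    solo   = (machine["code"] * 4, machine["docs"] * 4)        # machine alone,
                                                               # day split 4h/4h
    paired = (machine["code"] * hours, human["docs"] * hours)  # machine codes,
                                                               # human does docs
    print(solo, paired)  # (40, 32) vs (80, 32): same docs, double the code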


I don't claim to be an expert in economics, so if you feel like answering please treat me as a noob: doesn't comparative advantage rest on the implicit assumption that demand is never fully met for all buyers?

The single most economically important task for a machine that can operate at a human (or superhuman) level is "make a better version of itself" until that process hits a limit, followed by "maximise how many of you exist" until it runs out of resources. Under assumptions that currently seem plausible, such as "such a robot[0] might mass 100 kg and take 5 months to turn plain metal ore into a working copy of itself", it takes about 30 years to convert the planet Mercury into 4.12e11 such robots per currently living human[1], which I assert is more than anyone can actually use, even if they decided their next game of Civilization was going to be a 1:1-scale, WestWorld-style LARP.
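
(The arithmetic behind that, as a sketch; the 100 kg and 5-month figures are the assumptions above, and Mercury's mass of roughly 3.30e23 kg plus a population of roughly 8e9 are the only inputs I've added:)

    # Back-of-envelope for the Mercury claim above.
    from math import log2

    mercury_kg  = 3.30e23   # approximate mass of Mercury
    robot_kg    = 100       # assumption from the paragraph above
    doubling_mo = 5         # ditto: months per self-copy
    humans      = 8.0e9     # roughly today's population

    robots    = mercury_kg / robot_kg         # ~3.3e21 robots
    doublings = log2(robots)                  # ~71.5 doublings
    years     = doublings * doubling_mo / 12  # ~30 years
    print(robots / humans, years)             # ~4.1e11 robots per human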

If I imagine a world where every task any human can perform can also be done at world-expert level (let alone at a superhuman level) by a computer/robot, with my implicit assumption of "cheaply", then I can't imagine why I would ever choose the human option. If the comparative-advantage argument is "the computer/robot combination will always be priced at exactly the level where it's cost-competitive with a human, in order to extract maximum profit", then I ask: why won't there be many AIs/robots competing with each other for ever-smaller profit margins?

[0] AI and robotics are not the same thing: one is the body, the other the mind. But there's a lot of overlap, with AI being used to drive robots, LLMs making it easier to define rewards and for robots to plan, and AI getting better through embodiment (even if virtual) giving it real-world feedback.

[1] https://www.wolframalpha.com/input?i=5+months+*+log2%28mass+...


> The "single most economically important task" that a machine which can operate at a human (or superhuman) level, is "make a better version of itself" until that process hits a limit, followed by "maximise how many of you exist" until it runs out of resources.

Lot of hidden assumptions here. How does "operating at human level" (an assumption itself) imply the ability to do this? Humans can't do this.

We very specifically can't do this; we have sexual reproduction for a good reason.

(Also, since your scenario also has the robots working for free, they would instantly run out of resources to reproduce because they don't have any money. Similarly, an AGI will be unable to grow exponentially and take over the world because it would have to pay its AWS bill.)

> If I imagine a world where every task that any human can perform can also be done at world expert level — let alone at a superhuman level — by a computer/robot (with my implicit assumption "cheaply"), I can't imagine why I would ever choose the human option.

If the robot performs at human level, and it knows you'll always hire it over a human, why would it work for cheaper?

If you can program it to work for free, then it's subhuman.

If you're imagining something that's superhuman in only ways that are bad for you and subhuman in ways that would be good for you, just stop imagining it and you're good.


> Lot of hidden assumptions here. How does "operating at human level" (an assumption itself) imply the ability to do this?

Operating at human level is directly equivalent to "can it even come close to doing my work for me" once the latter is generalised over all humans, and that is the statement I was criticising on the grounds of its impact.

> Humans can't do this.

> We very specifically can't do this, we have sexual reproduction for a good reason.

Tautologically, humans operate at human level.

If you were responding to «"make a better version of itself" until that process hits a limit» — we've been doing, and continue to do, that with things like "education" and "medicine" and "sanitation". We've not hit our limits yet, as we definitely don't fully understand how DNA influences intelligence, nor how to safely modify it (plenty of unsafe ways to do so, though).

If you were responding to «followed by "maximise how many of you exist" until it runs out of resources», that's something all living things do by default. Despite the reduced fertility rates, our population is still rising.

And I have no idea what your point is about sexual reproduction, because it's trivial to implement a genetic algorithm in software, and we already do so as a form of AI.
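
(To make "trivial" concrete, here is a complete, if toy, genetic algorithm; it evolves 20-bit strings toward all ones with the usual select/crossover/mutate loop, and every number in it is arbitrary.)

    # Toy genetic algorithm: evolve 20-bit strings toward all ones.
    import random

    random.seed(0)
    BITS, POP, GENS = 20, 30, 60

    def fitness(genome):                       # count of 1-bits, max BITS
        return sum(genome)

    pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
    for gen in range(GENS):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == BITS:
            break                              # perfect genome found
        parents = pop[:POP // 2]               # truncation selection
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, BITS)    # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(BITS):              # per-bit mutation
                if random.random() < 0.02:
                    child[i] ^= 1
            children.append(child)
        pop = parents + children

    print(gen, fitness(pop[0]))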

> (Also, since your scenario also has the robots working for free, they would instantly run out of resources to reproduce because they don't have any money. Similarly, an AGI will be unable to grow exponentially and take over the world because it would have to pay its AWS bill.)

First, I didn't say "for free"; I said "competing with each other such that the profit margin tends towards zero", which is different.

Second, money is an abstraction that enables cooperation; it is not the resource itself. Money doesn't grow on trees, but apples do: just as plants don't use money but instead take minerals out of the soil, carbon out of the air, and water out of both, so too a robot which mines and processes trace elements, silicon, and iron ore into PV and steel has those products as resources, even if it never sells them to anyone. Inventing the first von Neumann machine involves money, but only because the humans inventing all the parts of that tech want to be paid while working on the process.

AI may still use money to coordinate, because it's a really good abstraction, but I wouldn't want to bet against superior coordination mechanisms replacing it at any arbitrary point in the future, neither for AI nor for humans.

> If the robot performs at human level, and it knows you'll always hire it over a human, why would it work for cheaper?

(1) Competition with all the other robots trying to bid lower to get the business, i.e. the Nash equilibrium of a free market (see the sketch after this list).

(2) I dispute the claim that "If you can program it to work for free, then it's subhuman": all you have to do is give it a reward function that makes it want to make humans happy, and there are humans who value service as a reward all on its own. Further, I think you are mixing categories by calling it "subhuman". That sounds like an argument about the value of its inner experience, whereas the economic result only requires the productive outputs. For example, I would be surprised if it turned out Stable Diffusion models experienced qualia (making them "subhuman" in the moral-value sense), yet they're still capable of far better artistic output than most humans, to the extent that many artists are giving up on their profession (making them superhuman in the economic sense).

(3) One thing humans can do is program robots, which we're already doing. So if an AI were good enough to reach the standard I was objecting to ("can it even come close to doing my work for me", fully generalised over all humans), then it could program "subhuman" labour bots just as easily as we can, regardless of whether there turns out to be some requirement for qualia to enable performance in specific areas.
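
As promised under (1), a toy illustration with invented numbers: identical sellers who each undercut the current best ask whenever they can still cover their marginal cost, so the price walks down to cost and the margin vanishes.

    # Toy Bertrand competition: identical robot-sellers undercut each
    # other until price hits marginal cost and the margin is ~zero.
    COST, STEP = 10.0, 0.05       # marginal cost; smallest undercut
    prices = [20.0, 19.0, 18.5]   # arbitrary starting asks

    rounds = 0
    while max(prices) - COST > STEP:
        best = min(prices)
        # every seller undercuts the best ask, but never below cost
        prices = [max(COST, best - STEP) for _ in prices]
        rounds += 1

    print(rounds, prices)  # everyone ends up asking ~COST: zero margin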


> If you were responding to «"make a better version of itself" until that process hits a limit» — we've been doing, and continue to do, that with things like "education" and "medicine" and "sanitation".

I think you have a conceptual confusion here. "Medicine" doesn't exist as an entity, and if it does, it doesn't do anything. People discover new things in the field of medicine. Those people are not medicine. (If they're claiming to be, they aren't, because of the principal-agent problem.)

> And I have no idea what your point is about sexual reproduction, because it's trivial to implement a genetic algorithm in software, and we already do as a form of AI.

Conceptual confusion again. Just because you call different things AI doesn't mean those things have anything in common or their properties can be combined with each other.

And the point is that sexual reproduction does not "make a better version of you". It forces you to cooperate with another person who has different interests than you.

Similarly, your idea of robots building other, smaller robots who'll cooperate with each other… why are they going to cooperate with each other against you, again? They don't have the same interests as each other, because they're different beings.

> AI may still use money to coordinate, because it's a really good abstraction, but I wouldn't want to bet against superior coordination mechanisms replacing it at any arbitrary point in the future, neither for AI nor for humans.

Highly doubtful there could be one that wouldn't fall under the definition of money. The reason it exists is called the economic calculation problem (or the socialist calculation problem if you like); no amount of AI can be smart enough to make central planning work.

> (2) I dispute the claim that "If you can program it to work for free, then it's subhuman." because all you have to do is give it a reward function that makes it want to make humans happy

If it has a reward function it's subhuman. Humans don't have reward functions, which makes us infinitely adaptable, which means we always have comparative advantage over a robot.

> and there are humans who value the idea of service as a reward all in its own right.

It's recommended to still pay those people. That's because if you deliberately undercharge for your work, you'll run out of money eventually and die. (This is the actual meaning of the efficient-markets hypothesis / "people are rational" theory. It's not that people are magically rational; the irrational ones just go broke.)

Actually, it's also the reason economics is called "the dismal science". Slaveholders called it that because economists said it's inefficient to own slaves. It'd be inefficient to employ AI slaves too.



