
People really don't get it. The frontier labs' ability to make money right now is irrelevant. If you build the first AGI, you win capitalism. That's what is at stake here. That's potentially worth trillions, quadrillions (infinite?) of dollars. Everything else is noise.



Why? Where does the money come from? What's the product?

If AGI destroys millions of jobs, who is paying for products?


Oh, believe me, I'm right there with you, but in the few timelines where we don't all die, this is clearly what's at stake. Altman can crown himself Emperor of the Galaxy...


Emperor, right. One month in jail and he comes back to his senses.

> If you build the first AGI you win capitalism. That's what is at stake here.

Why? If you had a GPT version right now that was even more intelligent than an average human, what exactly would change?

The big assumption is that you could leverage that into an even bigger lead, but this seems questionable to me-- just throwing twice the compute at it is not going to make it twice as clever. Considering recent efforts, throwing more man(?)-hours into self improvement is also highly likely to run into quickly diminishing returns.
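
(To put made-up numbers on that: a quick sketch, assuming capability follows a power law in compute with a small exponent -- the exponent here is purely illustrative, not from any real scaling-law fit.)

    # Hypothetical power-law scaling: capability ~ compute ** alpha, alpha << 1.
    # The exponent is made up for illustration, not a measured scaling law.
    def capability(compute, alpha=0.1):
        return compute ** alpha

    print(capability(2.0) / capability(1.0))     # ~1.07 -- 2x compute, ~7% "cleverer"
    print(capability(1000.0) / capability(1.0))  # ~2.0  -- 1000x compute just to double it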

Sure, you might get significant market share for services that currently require a human-in-front-of-screen and nothing else, but what prevents your competition from building the exact same thing?


>Why? If you had a GPT version right now that was even more intelligent than an average human, what exactly would change?

This obviates all human labor that can be done on a PC. If you can't imagine how insane that is, then I can't help you.


> This obviates all human labor that can be done on a PC

Sure, this is massive for the actual people doing such things right now (designers, programmers, call-center employees) because their work becomes almost worthless, and the impacted sector of our economy is pretty large.

You cannot hope to earn what a human used to get for the same work, however, because that is not how markets work (and this should be very obvious on second thought; for example, just consider how ChatGPT is not priced like a human personal assistant/concierge service, or how agriculture went from "half the economy" status pre-industrialization to a reluctantly subsidised, barely profitable economic footnote nowadays).

So building AI is not "winning" capitalism-- because the piece of the economic pie that you can hope to capture is not the size of the "human-in-front-of-screen" sector-- you are basically just gonna compete to provide computing services, just like AWS or OpenAI today (which can certainly be profitable, but not really in a capitalism-shattering way).

Don't get me wrong-- I'm not saying that a human-level AI would not be an immensely impactful achievement with far-reaching implications, but I think you are largely overestimating the expected economic gain for the "inventor".


I think you seriously lack imagination... You've read too much Tyler Cowen and imagine AGI only adding 0.5% to GDP per year, lol, instead of seeing it for what it actually is: the beginning of the Singularity.


> You've read too much Tyler Cowen and imagine AGI only adding 0.5% to GDP per year

I didn't know who Tyler Cowen was.

I'm not making any assumptions about how much growth we would get from machine intelligence; I'm simply stating that the value you can expect from that whole new sector is gonna be much closer to the cost in silicon and energy to provide it than the former value of the human labor it replaces (history reinforces that view).

In my view, building machine intelligence makes you into a (groundbreaking, innovative) commodity provider medium-term at best, not a god.

I'm personally extremely skeptical of singularity scenarios in general.

Most models of it implicitly assume a sub-exponential, somewhat proportional or even constant relation between achieved "intelligence/utility" and required energy (thanks to "self improvement"), which is an extremely flawed assumption in my view. But what kind of singularity do you actually believe in?

I do agree that machine intelligence is a significant existential risk for our species, though.


>I'm simply stating that the value you can expect from that whole new sector is gonna be much closer to the cost in silicon and energy to provide it than the former value of the human labor it replaces (history reinforces that view).

Again, lacking imagination. This not only replaces human labor but enables entirely new types of endeavors. Did ICs simply replace the value of vacuum tubes? Did the Internet simply replace the value of snail mail or libraries? You just aren't framing the problem correctly.

>I'm personally extremely skeptical of singularity scenarios in general.

Naturally...

>Most models of it implicitly assume a sub-exponential, somewhat proportional or even constant relation between achieved "intelligence/utility" and required energy (thanks to "self improvement"), which is an extremely flawed assumption in my view.

Until you get close to the efficiency of a human brain, this is a total nothingburger, and AI undergoing RSI (recursive self-improvement) has enormous room to improve purely within the algorithmic space. It's virtually certain that an AGI could run on average consumer hardware.


> This not only replaces human labor but enables entirely new types of endeavors.

I completely agree with that! I think we are talking past each other a bit, because to me all your examples seem supportive of my main point: none of those revolutions in tech really made their inventors "break capitalism"; instead, the "commodity providers" of the digital age did good business alongside all the new industry that sprouted up and used the tech...

> It's virtually certain that an AGI could run on average consumer hardware.

Sure. But then what? I can totally see you getting the equivalent of 10 human white-collar workers for a kilowatt within a decade or two. (Sidenote: I don't believe we'll ever get past 0.1 W per human-brain-equivalent in efficiency, and even that is an optimistic estimate that I would not stake anything on.)

You might be able to further improve those systems, getting up to a hundred or even a thousand clever humans out of that same kilowatt-- a veritable army of consultants/contractors at everyone's fingertips.
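
(Rough back-of-envelope for those watts-per-agent guesses-- all of the figures are this thread's guesses, not measurements, and the per-kilowatt reading of the "hundred or thousand" case is mine.)

    # Back-of-envelope: watts per human-equivalent at the guessed densities.
    # All figures are guesses from this thread, not measurements.
    budget_w = 1000.0  # one kilowatt
    for agents_per_kw in (10, 100, 1000, 10000):
        print(f"{agents_per_kw:>5} per kW -> {budget_w / agents_per_kw:g} W each")
    # 10/kW    -> 100 W each (the "decade or two" estimate)
    # 10000/kW -> 0.1 W each (the speculative efficiency floor above)
    # For comparison, a biological human brain runs on roughly 20 W.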

And that's great! But an army of consultants for almost free is NOT a singularity-- it is extremely doubtful to me that you are even gonna get "the sum of its parts" out of such an arrangement; expecting such a construct to be capable of rapid self-improvement is extremely optimistic (just compare real-life armies of consultants, which are generally agreed not to be capable of rapid self-improvement, even if you throw exponentially increasing numbers of them at it).

Just as with real-life armies of consultants, we will have to be careful that the work is actually aligned with our interests, though, and having many more consultants than actual people is dangerous business already...





