
It's justified if AGI is possible. If AGI is possible, then the entire human economy stops making sense as far as money goes, and 'owning' part of OpenAI gives you power.

That is, of course, assuming AGI is possible and exponential, and that market share goes to a single entity instead of a set of entities. Lots of big assumptions. Seems like we're heading towards a slow, lackluster singularity though.




I was thinking about how the economy has been actively making less sense and getting more and more divorced from reality year after year, AI or not.

It's the simple fact that the ability of assets to generate wealth has far outstripped the ability of individuals to earn money by working.

Somehow real estate has become so expensive everywhere that owning a shitty apartment is impossible for the vast majority.

When the world's population was exploding during the 20th century, housing prices were not a problem; yet somehow nowadays it's impossible to build affordable housing to bring prices down, even though the population is stagnant or growing slowly.

A company can be worth $1B if someone invests $10m in it for a 1% stake - where did the remaining $990m come from? Likewise, the stock market is full of trillion-dollar companies whose valuations beggar all explanation, considering the sizes of the markets they serve.
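To make the arithmetic concrete, here is a minimal sketch (the numbers are the ones from the example above; "post-money valuation" is the standard term for the implied figure):

    # Implied valuation from a minority investment (numbers from the example above).
    investment = 10_000_000   # cash that actually changed hands
    stake = 0.01              # fraction of the company purchased

    valuation = investment / stake          # $1,000,000,000 "post-money"
    paper_value = valuation - investment    # $990,000,000 nobody actually paid

    print(f"implied valuation: ${valuation:,.0f}")
    print(f"paper value:       ${paper_value:,.0f}")

The $990m is an extrapolation from a single marginal trade: it assumes every other share would fetch the same price the last buyer paid, which nothing guarantees.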

The rich elites are using their wealth to control access to basic human needs (namely housing and healthcare) to squeeze the working population for every drop of money. Every wealth metric shows the 1%, and the 1% of the 1%, controlling successively larger portions of the economic pie. At this point money is ceasing to be a proxy for value and is becoming a tool for population control.

And the weird thing is it didn't use to be nearly this bad even a decade ago, and we can only guess how bad it will get in a decade, AGI or not.

Anyway, I don't want to turn this into a fully-written manifesto, but I have trouble expressing these ideas in a concise manner.


> Somehow real estate has become so expensive everywhere that owning a shitty apartment is impossible for the vast majority.

Approximately 2/3rds of homes in the US are owner-occupied.


It's interesting that the figure is similar in Australia, but from the POV of the people.

Approximately 2/3rds of Australians live in an owner-occupied home.


> When the world's population was exploding during the 20th century, housing prices were not a problem, yet somehow nowadays, it's impossible to build affordable housing to bring the prices down, though the population is stagnant or growing slowly.

In Canada, the population is still growing at a fairly impressive rate (https://www.macrotrends.net/global-metrics/countries/CAN/can...), and that growth tends to concentrate in major population centres. There are advocacy groups that seek to push Canadian population growth well above UN projections (e.g. the https://en.wikipedia.org/wiki/Century_Initiative "aims to increase Canada's population to 100 million by 2100") through immigration. In Japan, where the population is declining, housing prices are not anything like the problem we observe in North America.

There's also the supply side. "Impossible to build affordable housing" is in many cases a consequence of zoning restrictions. (Economists also hold very strongly that rent control doesn't work - see e.g. https://www.brookings.edu/articles/what-does-economic-eviden... and https://www.nmhc.org/research-insight/research-notes/2023/re... ; real "affordable housing" is just the effect of more housing.)


> Somehow real estate has become so expensive everywhere that owning a shitty apartment is impossible for the vast majority.

That's to be expected when governments forbid people from building housing. The only thing I find surprising is when people blame this on "capitalism".


> And the weird thing is it didn't use to be nearly this bad even a decade ago, and we can only guess how bad it will get in a decade, AGI or not.

The last 5 years have reflected a substantial decline in QOL in the States; you don't even have to look back that far.

The coronacircus money-printing really accelerated the decline.


> If AGI is possible, then the entire human economy stops making sense as far as money goes, and 'owning' part of OpenAI gives you power.

That's if AGI is possible and not easily replicated. If AGI can be copied and/or re-developed like other software then the value of owning OpenAI stock is more like owning stock in copper producers or other commodity sector companies. (It might even be a poorer investment. Even AGI can't create copper atoms, so owners of real physical resources could be in a better position in a post-human-labor world.)


This belief comes from confusing the singularity (every atom on Earth is converted into a giant image of Sam Altman) with AGI (a store employee navigates a confrontation with an unruly customer, then goes home and wins at Super Mario).


If I recall correctly, these terms were used more or less interchangeably for a few decades, until 2020 or so, when OpenAI started making actual progress towards AGI and it became clear that the type of AGI that could be imagined at that point would not be the type that produces a singularity.


Exactly. I continually fail to see how "the entire human economy ends" overnight with another human-like agent out there - especially if it's confined to a server in the first place - it can't even "go home" :)


But what if that AGI can fit inside a humanoid robot, and that robot is capable of self-replication, even if that means digging sand out of the ground with a spade to make silicon?


We already have humanoid intelligences that self-assemble and power themselves from common materials, as a colony of incredibly advanced nanobots.


Yes. The goal is to emulate that with different substrates to understand how it works and to have better control over existing self-replicating systems.


The first AGI will have such an advantage. It'll be the first thing that is smart and tireless, that can do anything from continuously hacking enemy networks, to trading across all investment classes, to basically taking over the news cycle on social media. It would print money and power.


Depends on how efficient it is. If it requires more processing power than we have to do all these things, competitors will have time to catch up while new hardware is created.


The GP said, "and exponential". If AGI is exponential, then the first one will have a head start advantage that compounds over time. That is going to be hard to overcome.
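To put hypothetical numbers on that (a toy model assuming the leader and a follower sit on the same exponential curve, with the leader merely starting 12 months earlier):

    # Constant-rate exponential growth with a fixed 12-month head start.
    growth_per_month = 1.2   # assumed: capability multiplies by 1.2x per month
    lead_months = 12

    for month in (0, 12, 24, 36):
        leader = growth_per_month ** (month + lead_months)
        follower = growth_per_month ** month
        print(f"month {month:2d}: ratio {leader / follower:.1f}x, "
              f"absolute gap {leader - follower:,.0f}")

The ratio stays fixed at 1.2^12 ≈ 8.9x, but the absolute gap widens without bound - one concrete reading of "compounds over time".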


I believe that AGI cannot be exponential for long because any intelligent agent can only approach nature's limits asymptotically. The first company with AGI will be about as much ahead as, say, the first company with electrical generators [1]. A lot of science fiction about a technological singularity assumes that AGI will discover and apply new physics to develop currently-believed-impossible inventions, but I don't consider that plausible myself. I believe that the discovery of new physics will be intellectually satisfying but generally inapplicable to industry, much like how solving the cosmological lithium problem will be career-defining for whoever does it but won't have any application to lithium batteries.

https://en.wikipedia.org/wiki/Cosmological_lithium_problem

[1] https://en.wikipedia.org/wiki/Siemens#1847_to_1901


I don't recall editing my message, but HN can be wonky sometimes. :)

Nothing is truly exponential for long, but the logistic curve could be big enough to do almost anything if you get imaginative. Without new physics, there are still some places where we can do some amazing things with the equivalent of several trillion dollars of applied R&D, which AGI gets you.
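For reference, the logistic curve invoked here looks exponential early and only later flattens toward a ceiling; a minimal sketch (K, r, and x0 are arbitrary illustrative values):

    import math

    def logistic(t: float, K: float = 1000.0, r: float = 0.5, x0: float = 1.0) -> float:
        # Logistic growth: ~exponential for small t, saturating at the ceiling K.
        return K / (1 + (K / x0 - 1) * math.exp(-r * t))

    for t in range(0, 31, 5):
        print(f"t={t:2d}: {logistic(t):7.1f}")

Early on the output is indistinguishable from a pure exponential; the argument above is that if the ceiling is high enough, the eventual flattening hardly matters.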


This depends on what a hypothetical 'AGI' actually costs. If a real AGI is achieved, but it costs more per unit of work than a human does... it won't do anyone much good.


Sure, but think of the Higgs... how long that took for just _one_ particle. You think an AGI, or even an ASI, is going to make an experimental effort like that go any faster? Dream on!

It astounds me that people don't realize how much of this cutting-edge science literally does NOT happen overnight, or even close to it; typically it takes on the order of decades!


Science takes decades, but there are many places where we could have more amazing things if we spent 10 times as much on applied R&D and manufacturing. It wouldn't happen overnight, but it will be transformative if people can get access to much more automated R&D. We've seen a proliferation of makers over the last few decades as access to information has become easier, and with better tools individuals will be able to do even more.

My point being that even if Science ends today, we still have a lot more engineering we can benefit from.


I had to edit my message just now because I was actually unsure if you edited. Sorry for any miscommunication.


If AGI is invented and the inventor tries to keep it secret then everyone in the world will be trying to steal it. And funding to independently create it would become effectively unlimited once it has been proven possible, much like with nuclear weapons.


We may not need smarter AI. Just less stupid AI.

The big problem with LLMs is that most of the time they act smart, and some of the time they do really, really dumb things and don't notice. It's not the ceiling that's the problem. It's the floor. Which is why, as the article points out, "agents" aren't very useful yet. You can't trust them to not screw up big-time.


> If AGI is possible, then the entire human economy stops making sense as far as money goes,

What does this mean in terms of making me coffee or building houses?


If we can simulate a full human intelligence at a reasonable speed, we can simulate 100 of them and ask the AGI to figure out how to make itself 10x faster.

Rinse and repeat.

That is exponential take off.

At the point where you have an army of AIs running at 1000x human speed, you can just ask it to design the mechanisms for, and write the code for, robots that automate any possible physical task.
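The "rinse and repeat" loop, written out as a toy model (on the comment's own assumptions: 100 parallel copies, each generation delivers the requested 10x speedup, and nothing else is the bottleneck):

    # Recursive self-improvement as pure multiplication.
    agents = 100     # assumed: we can afford to run 100 simulated researchers
    speed = 1.0      # generation 0 runs at 1x human speed

    for generation in range(1, 5):
        speed *= 10  # assumed: each generation makes the next run 10x faster
        print(f"gen {generation}: {agents * speed:,.0f} human-equivalents")

Whether each 10x step keeps being achievable - compute, energy, and physics all have to cooperate - is of course the load-bearing assumption, and it's the one the skeptical replies below push back on.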


There are about 8 billion human intelligences walking around right now and they've got no idea how to begin making even a stupid AGI, let alone a superhuman one. Where does the idea that 100 more are going to help come from?


This was my argument a long time ago. The common counter was that we'd have a bunch of geniuses that knew tons of research. Well, we probably already have millions of geniuses. If anything, they use their brains for self-enrichment (e.g. money, entertainment) or on a huge assortment of topics. If all the human geniuses didn't do it, then why would the AGI instances do it?

We also have people brilliant enough to maybe solve the AGI problem or cause our extinction. Some are amoral. Many mechanisms pushed human intelligences in other directions. They probably will for our AGIs, assuming we even give them all the power unchecked. Why are they so worried the intelligent agents will not likewise be misdirected or restrained?

What smart, resourceful humans have done (and not done) is a good starting point for what AGI would do. At best, they'll probably help optimize some chips and LLM runtimes. Patent minefields with sub-28nm design, especially mask-making, will keep unit volumes of true AGIs much lower, at higher prices, than systems driven by low-paid workers with some automation.


This sounds like magic, not science.


What do you mean by this? Is there any fundamental property of intelligence, physicality, or the universe, that you think wouldn't let this work?


Not OP, but yes. Electron size vs band gap, computing costs (in terms of electricity), the other raw materials needed for that energy, etc... sigh... it's physics, always physics... what fundamental property of physics do you think would let a vertical takeoff in intelligence occur?


If you look at the rate of mathematical operations conducted, we're already going hard vertical. Physics and material limitations will slow that eventually as we hit diminishing returns on converting the planet to computer chips, but we're in the singularity as proxied by mathematical operations.


> If you look at the rate of mathematical operations conducted, we're already going hard vertical.

Not if you remember to count all the computations being done by the quintillions of nanobots across the world known as "human cells."

That's not only inside cells, and not just neurons either. For example, your thymus is busy brute-forcing the impossibly large space of receptor combinations, and putting every candidate cell release through a very rigorous set of acceptance tests.


The human brain still has orders of magnitude more processing power than LLMs. Even if we develop superintelligence, the current hardware can't run it, which gives competitors time to catch up.


Nothing, and the hilarious thing is that the AI figureheads admit that technology (as in, defined by new theorems produced and new code written) will do pathetically little to move the needle on human happiness.

The guy running Anthropic thinks the future is in biotech, developing the cure to all diseases, eternal youth etc.

Which is technology all right, but it's unclear to me how these chatbots (or other AI systems) are the quickest way to get there.


I think it's definitely the case that AI has certain really useful niches, but it is hard to know which ones will really make enough money to make training an AI worth it. E.g. "parse a million company statements into these criteria and invest based on an algorithm over those criteria" might be very valuable. Maybe someone's doing it. But I struggle to think it'd be worth billions.


> If AGI is possible, then the entire human economy stops making sense as far as money goes

I've heard people on HN say this (even without the money condition) and I fail to grasp the reasoning behind it. Suppose in a few years Altman announces a model, say o11, that is supposedly AGI, and it hits over 90% on several benchmarks. I don't believe it's possible with LLMs because of their inherent limitations, but let's assume it can solve general tasks in a way similar to an average human.

Now, how come "the entire human economy stops making sense"? In order to eat, we need farmers; we need construction workers, shops, etc. As for white-collar workers, you will need a whole range of people to maintain and further develop this AGI. So IMHO the opposite is true: the human economy will work exactly as before, but the job market will continue to evolve, with people using AGI in a similar way that they use LLMs now, but probably with greater confidence. (Or not.)


The thinking goes:

- any job that can be done on a computer is immediately outsourced to AI, since the AI is smarter and cheaper than humans

- humanoid robots are built that are cheap to produce, using tech advances that the AI discovered

- any job that can be done by a human is immediately outsourced to a robot, since the robot is better/faster/stronger/cheaper than humans


If you think about all the people trying to automate away farming, construction, and transport/delivery: the people doing the automation themselves get automated out first, and the automation figures out how to do the rest. So a fully robotic economy is not far off, if you can achieve AGI.


Why do we work? Ultimately, we work to live.* If the value of our labor is determined by scarcity, then what happens when productivity goes nearly infinite and the scarcity goes away? We still have needs and wants, but the current market will be completely inverted.


One stratum in that assumption-heap to call out explicitly: assuming LLMs are an enabling route to AGI and not a dead end or a supplemental feature.


Well, AGI would make the brainy information-worker part of the economy obsolete. We'll still need the jobs that interact with the physical world for quite a while. So… all us HN types should get ready to work the mines or pick vegetables.


If we hit true AGI, physical labor won’t be far behind the knowledge workers. The first thing industrial manufacturers will do is turn it towards designing robotics, automating the design of factories, and researching better electromechanical components like synthetic muscle to replace human dexterity.

IMO we’re going to hit the point where AI can work on designing automation to replace physical labor before we hit true AGI, much like we’re seeing with coding.


If AGI is possible then it too becomes a commodity, and we experience a massive round of deflation in the cost of everything not intrinsically rare. Land, food, rare materials, energy, and anything requiring human labor stay expensive, and everything else is almost free.

I don't see how OpenAI wouldn't crash and burn here. Given the history of models, it would be at most a year before you'd have open AGI; then the horse is out of the barn and begins to self-improve. Pretty soon the horse is a unicorn, then a satyr, and so on.

(I am a near-term AGI skeptic BTW, but I could be wrong.)

OpenAI's valuation is a mixture of hype speculation and the "golden boy" cult around Sam Altman. In the latter sense it's similar to the golden boy cults around Elon Musk and (politically) Donald Trump. To some extent these cults work because they are self-fulfilling feedback loops: these people raise tons of capital (economic or political) because everyone knows they're going to raise tons of capital so they raise tons of capital.





