If their AGI dreams came true, a 3-month head start might be more than enough. They probably won't, but it's interesting to ponder what the next few hours, days, and weeks would look like for someone who actually wielded AGI.

Like let's say you have a few datacenters of compute at your disposal and the ability to instantiate millions of AGI agents - what do you have them do?

I wonder if the USA already has a secret program for this under national defense. It's interesting that once you do control an actual AGI, you'd want to speed-run a bunch of things. The flip side: how do you detect that an adversary already has it or is using it, and what do you do in that case?




How many important problems are there where a 3-month head start on the data side is enough to win permanently and retain your advantage in the long run?

I'm struggling to think of a scenario where "I have AGI in January and everyone else has it in April" is life-changing. It's a win, for sure, and it's an advantage, but success in business requires sustainable growth and manageable costs.

If (random example) the bargain OpenAI strikes is "we spend every cent of our available capital to get AGI 3 months before the other guys do", they've now burned through all the resources they would need to leverage AGI and turn it into profitable, scalable businesses, while the other guys can take it slow and arrive with full pockets. I don't think their leadership is stupid enough to burn all their resources chasing AGI, but it does seem like operating and training costs are an ongoing problem for them.

History is littered with first-movers who came up with something first and then failed to execute on it, only for someone else to follow up and actually turn the idea into a success. I don't see any reason to assume that the "first AGI" is going to be the only successful AGI on the market, or even a success at all. Even if you've developed an AGI that can change the world you need to keep it running so it can do that.

Consider it this way: Sam Altman & his ilk have been talking up how dangerous OpenAI's technology is. Are risk-averse businessmen and politicians going to be lining up to put their livelihood or even their lives in the hands of "dangerous technology"? Or are they going to wait 3-6 months and adopt the "safe" AGI from somebody else instead?


Well, that's the thought exercise: is there something you can do with an almost unlimited supply of "brains" of roughly human capability but much faster, within a few days / weeks / months? Let's say you can instantiate 1 million agents for 3 months, each roughly 100x faster than a human. That's the equivalent of 100 million humans thinking in parallel, or on the order of 200 billion human-brain-hours, to dump into whatever you want. As long as your plans don't require building too many real-world things that actually require moving atoms around, I think you could do some interesting things. You could dump a slice of that into "better than AGI AI" to start off, for example, then move on to other things. If the agents are good enough, you might be able to find enough zero-days to disable any adversary through software, among other interesting things.
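For concreteness, here's a rough back-of-envelope of that capacity, taking the numbers above at face value (1 million agents, 100x human speed, running around the clock for 3 months); every input below is an assumption from this thread, not a real figure:

    # Back-of-envelope: human-equivalent thinking capacity in the scenario above.
    # All inputs are assumptions from the thread, not real figures.
    agents = 1_000_000           # AGI agents you can instantiate
    speedup = 100                # each agent is ~100x faster than a human
    months = 3                   # length of the head start
    hours_per_month = 30 * 24    # agents run around the clock

    human_equivalents = agents * speedup                        # ~100 million "human thinkers"
    brain_hours = human_equivalents * months * hours_per_month  # ~216 billion human-brain-hours

    print(f"{human_equivalents:,} human-equivalent thinkers")
    print(f"{brain_hours:,} human-brain-hours over {months} months")

Under those assumptions you get roughly 100 million parallel human-equivalent thinkers, which works out to a bit over 200 billion human-brain-hours in the 3-month window.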


Where does "almost unlimited" come into the picture though? I see people talking like AGI will be unlimited when it will be limited by available compute resources, and like I suggested, being 'first' might come at the cost of the war chest you'd need to access those resources.

What does it take to instantiate 1 million agents? Who has that kind of money and hardware? Would they still have it if they burn everything in the tank to be first?


> Where does "almost unlimited" come into the picture though

>> Like let's say you have a few datacenters of compute at your disposal and the ability to instantiate millions of AGI agents - what do you have them do?

> Who has that kind of money and hardware?

Any hyperscaler, plus most of the major geopolitical players. So the ones who matter.


Once you have AGI, you use it to collect resources, cripple competitors, and build a snowball effect that makes you unbeatable. Three months of AGI in the right hands is enough to dominate the world economically.


Only if the AGI is cheaper than a human; if the AGI is more expensive to run than a human, there won't be any snowballing. And the most likely case is that the first AGI is more expensive to run than a human, so a few months of overly expensive human-level AI bots won't disrupt the world at all.
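To make that break-even point concrete, here's a minimal sketch with purely illustrative numbers (the compute cost, speedup, and wage below are assumptions for the sake of the example, not estimates from anyone in this thread):

    # Does one agent produce a human-hour of work for less than a human costs?
    # All numbers are illustrative assumptions, not real estimates.
    compute_cost_per_agent_hour = 10_000.0  # $ of compute to run one agent for an hour (assumed)
    speedup = 100                           # human-equivalent hours produced per agent-hour (assumed)
    human_wage_per_hour = 60.0              # $ for a comparable human hour (assumed)

    agi_cost_per_human_hour = compute_cost_per_agent_hour / speedup  # $100 with these numbers

    if agi_cost_per_human_hour < human_wage_per_hour:
        print("cheaper than a human -> profits fund more compute -> snowball")
    else:
        print("more expensive than a human -> no snowball, just an expensive demo")

With these particular numbers the AGI costs about $100 per human-equivalent hour versus $60 for the human, so no snowball; drop the compute cost far enough and the conclusion flips with it.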



