If we can simulate a full human intelligence at a reasonable speed, we can simulate 100 of them and ask those AGIs to figure out how to make themselves 10x faster.
Rinse and repeat.
That is exponential take off.
At the point where you have an army of AIs running at 1000x human speed, you can just ask it to design the mechanisms for, and write the code to control, robots that automate any possible physical task.
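A toy back-of-envelope of that compounding claim; every number below is an illustrative assumption, not a measurement:

```python
# Toy model of recursive self-speedup: illustrative assumptions only.
# Assume each generation makes the system 10x faster, and each
# generation takes the same amount of subjective work, so the
# wall-clock time per generation shrinks 10x each round.

speedup = 1.0                # starting speed, in human-equivalents
wall_clock_years = 0.0       # cumulative real time elapsed
years_per_generation = 2.0   # assumed human-years of work for the first 10x

for generation in range(1, 6):
    wall_clock_years += years_per_generation / speedup
    speedup *= 10
    print(f"gen {generation}: {speedup:>9,.0f}x human speed "
          f"after {wall_clock_years:.3f} years")
```

Under those (generous) assumptions the wall-clock times form a converging geometric series: 1000x arrives only months after 10x does. That convergence is the "take off" intuition; the objections below attack the assumed constants, not the arithmetic.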
There are about 8 billion human intelligences walking around right now and they've got no idea how to begin making even a stupid AGI, let alone a superhuman one. Where does the idea that 100 more are going to help come from?
This was my argument a long time ago. The common counter was that we'd have a bunch of geniuses who knew tons of research. Well, we probably already have millions of geniuses. If anything, they use their brains for self-enrichment (e.g. money, entertainment) or spread them across a huge assortment of topics. If all the human geniuses didn't do it, why would the AGI instances?
We also have people brilliant enough to maybe solve the AGI problem, or to cause our extinction. Some are amoral. Yet many mechanisms pushed those human intelligences in other directions, and they probably will for our AGIs too, assuming we even hand them all that power unchecked. Why are the worriers so sure the intelligent agents won't likewise be misdirected or restrained?
What smart, resourceful humans have done (and not done) is a good starting point for what AGI would do. At best, they'll probably help optimize some chips and LLM runtimes. Patent minefields around sub-28nm design, especially mask-making, will keep unit volumes of true AGIs much lower, and prices much higher, than systems driven by low-paid workers with some automation.
Not OP, but yes. Electron size vs band gap, computing costs (in terms of electricity), the other raw materials needed for that energy, etc... sigh... it's physics, always physics... what fundamental property of physics do you think would let a vertical take-off in intelligence occur?
If you look at the rate of mathematical operations conducted, we're already going hard vertical. Physics and material limitations will slow that eventually as we hit diminishing marginal returns on converting the planet into computer chips, but we're in the singularity as proxied by mathematical operations.
> If you look at the rate of mathematical operations conducted, we're already going hard vertical.
Not if you remember to count all the computations being done by the quintillions of nanobots across the world known as "human cells."
That's not only inside cells, and not just neurons either. For example, your thymus is busy brute-forcing the impossibly large space of immune-receptor combinations, and putting every candidate cell through a very rigorous set of acceptance tests before release.
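Rough arithmetic behind that "quintillions" figure; the cell count is a loose literature estimate, and a cell's chemical "operation" is of course not comparable to a FLOP:

```python
# Back-of-envelope: how many biological "computers" exist worldwide?
# Both figures are loose order-of-magnitude estimates.

humans = 8e9               # world population
cells_per_human = 3.7e13   # commonly cited estimate of cells per body

total_cells = humans * cells_per_human
print(f"cells worldwide:  {total_cells:.1e}")          # ~3.0e23
print(f"in quintillions:  {total_cells / 1e18:.1e}")   # ~3.0e5
```

So "quintillions" is, if anything, an understatement by several orders of magnitude.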
The human brain still has orders of magnitude more processing power than LLMs. Even if we develop superintelligence, current hardware can't run it, which gives competitors time to catch up.
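A hedged sketch of that comparison; estimates of the brain's effective throughput span roughly three orders of magnitude in the literature, so both bounds below are assumptions for illustration:

```python
# Rough comparison of estimated brain throughput vs one datacenter GPU.
# All numbers are order-of-magnitude assumptions, not measurements.

brain_ops_low, brain_ops_high = 1e15, 1e18   # ops/sec, commonly cited range
gpu_flops = 1e15                             # rough low-precision FLOP/s for
                                             # a modern datacenter GPU

print(f"GPUs per brain (low estimate):  {brain_ops_low / gpu_flops:,.0f}")
print(f"GPUs per brain (high estimate): {brain_ops_high / gpu_flops:,.0f}")
```

On the high-end estimates one brain is worth on the order of a thousand GPUs, which is the "orders of magnitude" gap claimed above; on the low-end estimates the gap has already closed, which is exactly why this point is contested.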