> A technology so advanced that not only does it subsume all other technology, but it is able to improve itself.
The problem is, a computer has no idea what "improve" means unless a human defines it for every type of problem. And of course a human will also have to provide guidelines about how long to think about the problem overall, which avenues to skip because they aren't relevant to a particular case, and so on. In other words, humans will never be able to stray too far from the training process.
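To make that concrete, here's a toy sketch (all names and numbers hypothetical, not anyone's actual system): a hill-climbing loop can only "improve" relative to a scoring function a human writes in. Swap the function and "better" means something entirely different.

```python
import random

def improve(candidate, score, mutate, steps=1000):
    """Simple hill climbing: keep a mutation only if the
    human-defined score function says it is an improvement."""
    best, best_score = candidate, score(candidate)
    for _ in range(steps):
        trial = mutate(best)
        trial_score = score(trial)
        if trial_score > best_score:  # "improve" == higher score, by fiat
            best, best_score = trial, trial_score
    return best

# Human-supplied definitions for one narrow problem:
# maximize f(x) = -(x - 3)^2, i.e. find x near 3.
score = lambda x: -(x - 3) ** 2
mutate = lambda x: x + random.uniform(-0.5, 0.5)

print(improve(0.0, score, mutate))  # converges near 3.0
```

Note that every piece doing real work here (`score`, `mutate`, the step budget) came from a human; the loop itself has no notion of "better" beyond what it was handed.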
We will likely never get to the point where an AGI can continuously improve the quality of its answers across all domains. The best we'll get, I believe, is an AGI that can optimize itself within a few narrow problem domains, which will have limited commercial application. We may make slow progress in more complex domains, but the quality of results--and the ability of the AGI to self-improve--will always level off asymptotically.
Huh? Humans are nowhere near the physical limits of intelligence, and we have many existence proofs that we (humans) can design systems that are superhuman in various domains. "Scientific R&D" is not something humans are even particularly well-suited to, from an evolutionary perspective.