I think that's more to do with how we perceive competence as static. For all the benefits the education system touts, where it matters it still comes down to talent.
But for the same reasons that we can't train an average Joe into a Feynman, what makes you think we have the formal models to do it in AI?
Yes, we can imagine that there's an upper limit to how smart a single system can be. Even suppose that this limit is pretty close to what humans can achieve.
But: you can still run more of these systems in parallel, and you can still try to increase processing speeds.
Signals in the human brain travel, at best, roughly at the speed of sound. Electronic signals in computers play in the same league as the speed of light.
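Rough back-of-envelope, just to put "same league as the speed of light" in perspective (the ~100 m/s and ~0.7c figures are ballpark assumptions, not precise values):

    # Assumed: fast myelinated nerve fibres conduct at ~100 m/s,
    # while electrical signals in copper propagate at ~0.7c.
    nerve_speed = 100           # m/s, upper end for myelinated axons
    wire_speed = 0.7 * 3e8      # m/s, typical propagation in copper traces

    print(f"speed ratio: ~{wire_speed / nerve_speed:,.0f}x")
    # -> roughly a factor of two million

So even granting identical "intelligence", the raw signalling substrate differs by about six orders of magnitude.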
Human I/O is optimised for surviving in the wild. We are really bad at taking in symbolic information (compared to a computer), and our memory is also really bad at retaining it. A computer system that's only as smart as a human, but has instant access to all the information on the Internet, to a calculator, and to writing and running code, can already effectively act much smarter than a human.