This argument needlessly anthropomorphizes the models. They are neither humans nor living entities; they are systems.
So, fine, the gemini-2.5-pro model hasn't gotten more intelligent.
What about the "Google AI Studio API" as a system? Or the "OpenAI chat completions API" as a system?
This system has definitely gotten vastly smarter based on the input it's gotten. Would you now concede that, if we look at the API level (which, by the way, is how you as the employer actually interact with it), this entity has gotten smarter far faster than the middle-schooler over the last 2.5 years?
But it's the AI researchers who made it smarter; it isn't a self-contained system like a child. If you fired the people maintaining it and it just interacted with users, it would stop improving.
The brain of a child is not self-contained either. Neither is the child as a whole: "it takes a village to raise a child," as the saying goes.
The entire reason we have a mandatory education system that doesn't stop at middle school (for me, middle school ended at age 11) is that it's a way to improve kids.
1. The child didn't learn algebra on its own either. Aside from Blaise Pascal, most children learned those skills by having experienced humans teach them.
2. How likely is it that we're going to fire everyone maintaining those models in the next 7.5 years?
> The child didn't learn algebra on its own either. Aside from Blaise Pascal, most children learned those skills by having experienced humans teach them.
That's the child interacting with an environment. We don't go in and rewire their brain to make them learn math.
If you made an AI that we could put in a classroom, and it learned everything needed to do any white-collar job that way, then it would be an AGI. Of course, just like a human, different jobs would mean it needs different classes; but, just like a human, you could still make it learn anything.
> How likely is it that we're going to fire everyone maintaining those models in the next 7.5 years?
If we stop making new models? Zero chance the models will replace such high-skill jobs. If we don't? Then that has nothing to do with whether current models are general intelligences.
Here's a question for you. If we take a model with open weights (say, LLaMA or Qwen) and give it access to learning materials, as well as tools to perform training runs on its weights and dynamically reload those updated weights, would that constitute learning to you? If not, why not?
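To make that concrete, here's roughly the loop I have in mind, as a minimal sketch using Hugging Face-style APIs. The checkpoint path, dataset format, and hyperparameters are placeholders, not a tested configuration:

```python
# Minimal sketch of the self-updating loop described above, assuming a
# Hugging Face transformers setup. Paths and hyperparameters are hypothetical.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_DIR = "checkpoints/current"  # hypothetical path the serving process loads from

def study_session(materials_path: str) -> None:
    """One 'lesson': fine-tune the current weights on new material, then save them back."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
    model = AutoModelForCausalLM.from_pretrained(MODEL_DIR)

    # "Learning materials": any plain-text corpus the system has been given access to.
    dataset = load_dataset("text", data_files=materials_path)["train"]
    tokenized = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True,
        remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="checkpoints/next",
            num_train_epochs=1,
            per_device_train_batch_size=1,
        ),
        train_dataset=tokenized,
        # Causal-LM objective: labels are the input tokens themselves.
        data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
    )
    trainer.train()

    # Overwrite the live checkpoint; subsequent requests are served by the updated weights.
    trainer.save_model(MODEL_DIR)
    tokenizer.save_pretrained(MODEL_DIR)
```

The engineering details aren't the point; the point is that once the tools are wired up, the gradient updates happen from exposure to the material, with no researcher in the loop.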
> Here's a question for you. If we take a model with open weights (say, LLaMA or Qwen) and give it access to learning materials, as well as tools to perform training runs on its weights and dynamically reload those updated weights, would that constitute learning to you? If not, why not?
It does constitute learning, but it won't make the model smart, since it isn't intelligent about its learning the way human brains are.