LLMs are a key piece: the insight is that token sequences can trigger actions in the real world. AGI is here. You can trivially spin up a computer-using agent and have it improve itself into a competent office worker.
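Rough shape of what I mean - a toy sketch, not a real vendor API. generate() stands in for whatever model you're calling, and the SHELL/DONE action format is invented for illustration; the point is just that the token stream gets parsed into real actions.

```python
import subprocess

def run_agent(generate, goal, max_steps=10):
    """Toy loop: model tokens in, real-world actions out."""
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        reply = generate("\n".join(history))      # token sequence from the model
        history.append(reply)
        cmd, _, arg = reply.partition(" ")
        if cmd == "SHELL":                        # tokens triggering a real action
            result = subprocess.run(arg, shell=True, capture_output=True, text=True)
            history.append(f"OBSERVATION: {result.stdout.strip()}")
        elif cmd == "DONE":
            break
    return history

# Stub "model" so the sketch runs as-is:
canned = iter(["SHELL echo hello from the agent", "DONE"])
print(run_agent(lambda prompt: next(canned), "say hello"))
```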
Agents can trivially self-improve. I'd be happy to show you - contact me at [email protected].
Why wouldn't you hand me 35 million dollars right now if I can clearly demonstrate that I have technology you haven't seen? Edge. Maybe you know something I don't, or maybe you just haven't seen it. While loops go hard ;)
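Since people keep asking what the while loop actually looks like: a minimal sketch, assuming a hypothetical generate(prompt) model call and a score(candidate) evaluator you'd supply yourself - neither is anything I'm shipping here.

```python
def self_improve(generate, score, seed_program, budget=5):
    """Keep asking the model for a better version; keep only strict improvements."""
    best, best_score = seed_program, score(seed_program)
    steps = 0
    while steps < budget:                          # the while loop in question
        candidate = generate(f"Improve this program:\n{best}")
        candidate_score = score(candidate)
        if candidate_score > best_score:           # discard regressions
            best, best_score = candidate, candidate_score
        steps += 1
    return best
```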
They don't need to release their internal developments to you to prove the plan scales - they can show incremental improvements on benchmarks. We can keep instructing the AI over time until it's superhuman; no fundamental innovations are needed anymore.
Keep in mind that the actual test is adversarial - a human judge is simultaneously chatting via text with a human and a program, knowing that one of the two is not human, and trying to work out which one is the machine.
Tokens don't need to be text either - you can move to higher-level "take_action" semantics, where something like "stream back 1 character to session#117" is itself a single function call. Training cheap models that can do things in the real world is going to change a huge amount of present capabilities over the next 10 years.
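Something like this is what I mean by "take_action" semantics - a toy sketch, with the action names, session ids, and registry all invented for illustration:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Action:
    name: str    # e.g. "stream_char"
    args: dict   # e.g. {"session": 117, "char": "h"}

sessions: Dict[int, list] = {117: []}

def stream_char(session: int, char: str) -> None:
    # "stream back 1 character to session#117" as a single function call
    sessions[session].append(char)

REGISTRY: Dict[str, Callable] = {"stream_char": stream_char}

def take_action(action: Action) -> None:
    # the model emits an Action instead of raw text tokens
    REGISTRY[action.name](**action.args)

take_action(Action("stream_char", {"session": 117, "char": "h"}))
print("".join(sessions[117]))   # -> h
```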