There can be more than one problem. The history of computing (or even just the history of AI) is full of things that worked better and better right up until they hit a wall. We already see diminishing returns from adding more and more training data. It's really not hard to imagine a series of breakthroughs taking us well beyond LLMs.