In terms of trajectory, I'm not really convinced we can do much better than we are now. Moore's law is ending in the next decade as we hit fundamental physical limits on how many transistors we can pack onto a chip. The growth in computational power is going to slow considerably, right as AI companies are struggling to get more GPU compute to train new models and run inference. OpenAI themselves supposedly paused sign-ups because they're running out of computational resources, to the point that it's impacting research. I don't see us getting much better than GPT-4 on von Neumann machines without expending ludicrous amounts of capital or a sea change in how we fundamentally do computation. It's hard to get around the fact that LLMs are already extremely expensive to run.