Is speed really an issue today though? With multiple cores, multiple machines etc., it seems like a simple enough engineering problem. How many "pages" does a big site have? How many can be touched by a single update?
Also see Hugo, as someone mentioned.
Not doubting that you're right regarding the history, just questioning whether it has to be this way today. OTOH, a dynamic page can also be plenty fast; it does not have to be WordPress.
The small answer is that rendering touches not just CPU but storage as well. Yes, we have SSDs, which help, but writing to "disk" is still relatively slow.
The big answer is: If you can render each of your pages very quickly, there's no real win from pre-rendering everything. You should just render on the fly. The whole point of pre-rendering, historically at least, is to make a site very fast by eating the cost of the render up front.
But in exchange for that storage "problem" you get to avoid hosting on a dynamic platform, and literally just need a static web host. As mentioned, you can put stuff up on S3 and be done with it.
At a large scale of visitors, this kind of approach is a lot easier to handle than the dynamic model.
You could also have a static website generator output, e.g., PHP or HTML files that are 99.99% static content, with includes/SSI for commonly shared content.
This way few or no pages need to change just because the shared content (e.g. a "latest posts" box) changes when a new post is added.
And you still get the speed of serving, security, AND the simplicity and easy hosting of merely serving static files.
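For illustration, a rough sketch of the SSI variant (file names and paths here are made up, not from any particular generator):

    <!-- post.html: generated once by the SSG at build time, never rewritten -->
    <article>
      <h1>My post</h1>
      <p>Body rendered at build time...</p>
    </article>

    <!-- this directive is resolved by the web server on each request -->
    <!--#include virtual="/includes/latest-posts.html" -->

    <!-- /includes/latest-posts.html: the only file the SSG regenerates
         when a new post is published -->
    <ul>
      <li><a href="/posts/newest.html">Newest post</a></li>
    </ul>

The server still only serves files off disk; it just splices in one shared fragment at request time (nginx needs "ssi on;", Apache needs Options +Includes plus the INCLUDES output filter).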