people persist in building systems that are less advanced than e.g. Multics and ITS [and] hypervisors
I find it interesting that the VMware-style hypervisors were originally implemented not using, but in spite of, all the virtualization features of the CPU. Yet today VM virtualization is considered the most reliable security boundary on shared hardware. No large security-conscious company would share user accounts on a Windows instance with untrusted parties, yet they will share virtual private servers.
I think what it says is that chip designers are lousy at developing what OS developers want, and OS designers are lousy at developing what customers want.
286
Remember, this chip was designed before anyone realized that DOS (and DOS-based device drivers) was going to take over the world. They never imagined anyone would want to switch from protected mode back to the obviously-inferior real mode. :-)
They were darn lucky IBM thought to put that auxiliary keyboard controller there to do a reset on the main CPU.
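For the curious, the workaround ran roughly as follows: since the 286 had no instruction to leave protected mode, the OS would stash a resume address, mark the upcoming reset as a "resume" in CMOS, and then ask the keyboard controller to yank the CPU's reset line. A minimal sketch in C, assuming the standard PC/AT ports (0x70/0x71 for CMOS, 0x64 for the 8042) and GCC-style inline assembly for port output; treat it as an illustration of the dance, not a drop-in implementation:

    /* Stand-in port-output primitive (GCC inline asm, x86). */
    static inline void outb(unsigned short port, unsigned char val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    void return_to_real_mode(void)
    {
        /* Write shutdown code 0x0A into CMOS register 0x0F so the
         * BIOS treats the coming reset as a resume: instead of a
         * full POST it jumps through the pointer stored at 0x40:0x67
         * in the BIOS data area (which the OS must have set up
         * beforehand -- not shown here). */
        outb(0x70, 0x0F);
        outb(0x71, 0x0A);

        /* Command 0xFE tells the 8042 keyboard controller to pulse
         * the CPU's reset line. The 286 comes back up in real mode
         * and the BIOS hands control to the saved address. */
        outb(0x64, 0xFE);

        for (;;)
            ;  /* spin until the reset takes effect */
    }

The round trip through the 8042 was painfully slow, which is reportedly why OS/2 later switched to deliberately triple-faulting the CPU instead, and why the 386's ability to simply clear the PE bit in CR0 and drop back to real mode was such a relief.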
"I find it interesting that the VMware-style hypervisors were originally implemented not using, but in spite of all the virtualizaton features of the CPU."
Errr, I've not directly studied this, but I've read that the x86_32 architecture is particularly hard to virtualize, and that VMware does it by dynamically rewriting the binaries it runs, while Xen obviously started out by paravirtualizing the hard stuff.
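The standard illustration of why trap-and-emulate fails on classic x86 is the handful of "sensitive but unprivileged" instructions. Here's a minimal sketch, assuming ordinary user-mode C with GCC inline assembly on x86/x86-64 Linux: POPF executed outside ring 0 silently ignores an attempt to change the interrupt flag rather than faulting, so a hypervisor running a deprivileged guest kernel never gets a trap it could emulate:

    #include <stdio.h>

    int main(void)
    {
        unsigned long before, after;

        /* Read EFLAGS/RFLAGS. */
        __asm__ volatile ("pushf\n\tpop %0" : "=r"(before));

        /* Attempt to clear IF (bit 9), as a deprivileged guest
         * kernel would when disabling interrupts. Outside ring 0
         * the CPU silently ignores the IF bit here: no change,
         * no fault, no trap. */
        __asm__ volatile ("push %0\n\tpopf"
                          : : "r"(before & ~(1UL << 9)) : "cc");

        __asm__ volatile ("pushf\n\tpop %0" : "=r"(after));

        printf("IF before: %lu, IF after attempted clear: %lu\n",
               (before >> 9) & 1, (after >> 9) & 1);
        return 0;
    }

Both values print as 1: the guest kernel's "disable interrupts" simply vanishes, with no fault for the hypervisor to catch. That's the hole VMware plugged with dynamic binary translation and Xen sidestepped by having paravirtualized guests call the hypervisor explicitly.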
Your second point speaks more to how horrible Windows security is than anything else, I'd say. The security-conscious are more willing to do that sort of sharing in the UNIX(TM) world, but of course it started out as a classic time-sharing system. And then Linux at least got seriously hardened by a whole bunch of people, most notably the part of the NSA that does this sort of thing (it's no accident a lot of SELinux is very familiar to anyone who knows and/or has studied Multics).
But all that said, separate VMs raise very high walls between parts of a system that must be protected from each other. A system with very serious real-world security requirements that I'm providing some advice on right now is using XCP and separate VMs to help achieve that. Of course it helps that nowadays we have CPU to burn (and with caches bigger than the main memory of any machine I used in the very early '80s), and that e.g. two 4 GB sticks of fast DRAM (DDR3 1333) cost ~$80. So I suppose this is in part an "if you have it, why not use it?" situation.
It's certainly the case "that chip designers are lousy at developing what OS developers want" (see the 286, AMD's dropping of rings, the 68000's inability to safely page ... although all of these examples at least have been or will be fixed). As for "OS designers are lousy at developing what customers want" ... very possibly. It's certainly a problem that developing a serious OS is for almost everyone a once-in-a-lifetime effort (David Cutler was famous for doing 3 or so). There's also the curse of backwards compatibility, which has also cursed the CPU designers.
But it's worse than that. It's hard to keep the original vision going, and contrariwise, sometimes part of that vision is wrong, or rather becomes wrong as things change. E.g. we all thought the Windows registry was a great idea ... and then it got dreadfully abused (my favorite: using autogenerated 8.3 file names as values, making restores problematic). There are some great and I gather correct screeds about Linus refusing to create a stable driver ABI, so keeping existing out-of-tree device drivers from regressing is ... a very big problem. (Of course, that's a big competitive advantage for Linux as well, but ... not a very nice one.)