I believe that paper predates the introduction of speculation-blocking MPKs. Could you build a single-address-space OS (SASOS) out of those without hitting problems with Spectre attacks? It's an open research question, but my gut says yes. MPKs are limited in number, so you may need an equivalent of swapping, with a fallback to page-table-based isolation. But it's worth noting that in a SASOS the notion of a process is unbundled, so you can layer on top newly defined hardware-enforced privacy domains that don't cleanly map to any existing notion of a process.
For example, all code from a particular vendor (origin) could share a single MPK whilst running, even if the failure ___domain for things like fault isolation is finer-grained.
> I believe that paper predates the introduction of speculation-blocking MPKs
That isn't enough, because you can induce misspeculation through paths that do (or would) have access to appropriate MPKs and do almost anything you want, including disclosing information through sidechannels you do have access to. Loading an MPK is akin to changing address space protections; it has to be a hard barrier that cannot be speculated through. You cannot even have code mapped that would have access to those MPKs, as you can induce misspeculation into this code.
> It's an open research question but my gut says yes.
My gut says no. There is just too much dark stuff going on in hardware. Sidechannels are outside of the models, and you can't verify anything from a model alone; you need the whole design of the chip.
Also, variant 4 (Speculative Store Bypass) is not addressed much in the literature. I couldn't write what I wanted to write because of NDAs, but I have personally written PoCs which basically show that hardware has to turn off memory speculation or you end up with a universal read gadget again. There is no software solution for variant 4.
I'm told, but haven't verified, that in older Intel CPUs loading an MPK wasn't a speculation barrier, but in newer CPUs it is. In other words, changing your current MPK is like changing address spaces, but much faster because there's no TLB flush or actual context switch.
I think there are also other caveats to consider. A lot of Spectre research (like your paper) is implicitly focused on the web renderer/V8 use case, but here we're discussing theoretical operating system designs. Let's say you sandbox an image decoder library in-process, using type safety and without using MPKs. Is this useless? No, because even if the image decoder is:
a. Maliciously doing speculation attacks.
b. Somehow this doesn't get noticed during development.
... the sandbox means it won't have any ability to make architectural-level changes. It can spy on you, but its mouth is sealed: the architectural sandbox means it has no ability to do any IO. To abuse Spectre in this context would require something really crazy, like speculatively walking the heap, finding the data you're looking for, encrypting it, steganographically encoding the result into the images it decodes, and then hoping those images somehow make it back to the attackers even though the destination is probably just the screen. This isn't even NSA-level stuff; it's more like Hollywood at that point.
Compare to the situation today: the image decoder (or whatever) is a buggy C library running in a process with network access because it's a web origin. Game over.
I worry that the consequence of Spectre research has been that people conclude "in-process sandboxing is useless, cross-process is too hard, oh well, too bad so sad". Whereas in reality in-process sandboxing even without MPKs or equivalent would still be a massive win for computer security in many contexts where Spectre is hardly your biggest problem.
> I worry that the consequence of Spectre research has been that people conclude "in-process sandboxing is useless, cross-process is too hard, oh well, too bad so sad". Whereas in reality in-process sandboxing even without MPKs or equivalent would still be a massive win for computer security in many contexts where Spectre is hardly your biggest problem.
Well, I agree that in-process sandboxing is still quite useful; it at least closes the barn door. But the rest of that isn't the conclusion we reached in Chrome: we had to go whole-hog multi-process for site isolation, plus moving as much as possible out of the renderer process so that there aren't many secrets left to steal.
It's really an issue for situations where a process (or platform) is required to run untrusted code from lots of different sources. There isn't yet a software solution that is robust to side channels, so those sources can still spy on each other. Clearly, two important cases that Google cares about are Cloud and the web.